May 15 00:28:06.906321 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 00:28:06.906343 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 14 22:53:13 -00 2025 May 15 00:28:06.906353 kernel: KASLR enabled May 15 00:28:06.906358 kernel: efi: EFI v2.7 by EDK II May 15 00:28:06.906364 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 15 00:28:06.906370 kernel: random: crng init done May 15 00:28:06.906377 kernel: ACPI: Early table checksum verification disabled May 15 00:28:06.906383 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 15 00:28:06.906389 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 00:28:06.906397 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906403 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906409 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906415 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906421 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906429 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906437 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906443 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906450 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:28:06.906456 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 00:28:06.906462 kernel: NUMA: Failed to initialise from firmware May 15 00:28:06.906469 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:28:06.906475 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 15 00:28:06.906481 kernel: Zone ranges: May 15 00:28:06.906488 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:28:06.906494 kernel: DMA32 empty May 15 00:28:06.906502 kernel: Normal empty May 15 00:28:06.906508 kernel: Movable zone start for each node May 15 00:28:06.906514 kernel: Early memory node ranges May 15 00:28:06.906521 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 15 00:28:06.906527 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 15 00:28:06.906534 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 15 00:28:06.906552 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 15 00:28:06.906559 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 15 00:28:06.906566 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 15 00:28:06.906572 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 15 00:28:06.906579 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 00:28:06.906585 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 00:28:06.906594 kernel: psci: probing for conduit method from ACPI. May 15 00:28:06.906600 kernel: psci: PSCIv1.1 detected in firmware. 
May 15 00:28:06.906607 kernel: psci: Using standard PSCI v0.2 function IDs May 15 00:28:06.906616 kernel: psci: Trusted OS migration not required May 15 00:28:06.906623 kernel: psci: SMC Calling Convention v1.1 May 15 00:28:06.906630 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 00:28:06.906638 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 00:28:06.906645 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 00:28:06.906652 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 00:28:06.906658 kernel: Detected PIPT I-cache on CPU0 May 15 00:28:06.906665 kernel: CPU features: detected: GIC system register CPU interface May 15 00:28:06.906672 kernel: CPU features: detected: Hardware dirty bit management May 15 00:28:06.906679 kernel: CPU features: detected: Spectre-v4 May 15 00:28:06.906686 kernel: CPU features: detected: Spectre-BHB May 15 00:28:06.906692 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 00:28:06.906699 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 00:28:06.906707 kernel: CPU features: detected: ARM erratum 1418040 May 15 00:28:06.906714 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 00:28:06.906721 kernel: alternatives: applying boot alternatives May 15 00:28:06.906729 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596 May 15 00:28:06.906736 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:28:06.906743 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:28:06.906750 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:28:06.906757 kernel: Fallback order for Node 0: 0 May 15 00:28:06.906763 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 00:28:06.906770 kernel: Policy zone: DMA May 15 00:28:06.906777 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:28:06.906785 kernel: software IO TLB: area num 4. May 15 00:28:06.906792 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 15 00:28:06.906799 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) May 15 00:28:06.906806 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:28:06.906813 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 00:28:06.906820 kernel: rcu: RCU event tracing is enabled. May 15 00:28:06.906828 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:28:06.906834 kernel: Trampoline variant of Tasks RCU enabled. May 15 00:28:06.906841 kernel: Tracing variant of Tasks RCU enabled. May 15 00:28:06.906848 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 00:28:06.906855 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:28:06.906862 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 00:28:06.906870 kernel: GICv3: 256 SPIs implemented May 15 00:28:06.906877 kernel: GICv3: 0 Extended SPIs implemented May 15 00:28:06.906883 kernel: Root IRQ handler: gic_handle_irq May 15 00:28:06.906890 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 00:28:06.906897 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 00:28:06.906904 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 00:28:06.906911 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 15 00:28:06.906918 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 15 00:28:06.906937 kernel: GICv3: using LPI property table @0x00000000400f0000 May 15 00:28:06.906944 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 15 00:28:06.906951 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 00:28:06.906959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:06.906966 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 00:28:06.906973 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 00:28:06.906980 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 00:28:06.906987 kernel: arm-pv: using stolen time PV May 15 00:28:06.906994 kernel: Console: colour dummy device 80x25 May 15 00:28:06.907001 kernel: ACPI: Core revision 20230628 May 15 00:28:06.907008 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 00:28:06.907015 kernel: pid_max: default: 32768 minimum: 301 May 15 00:28:06.907022 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 00:28:06.907031 kernel: landlock: Up and running. May 15 00:28:06.907038 kernel: SELinux: Initializing. May 15 00:28:06.907045 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:28:06.907052 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:28:06.907060 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 00:28:06.907067 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:28:06.907075 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 00:28:06.907082 kernel: rcu: Hierarchical SRCU implementation. May 15 00:28:06.907089 kernel: rcu: Max phase no-delay instances is 400. May 15 00:28:06.907097 kernel: Platform MSI: ITS@0x8080000 domain created May 15 00:28:06.907105 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 00:28:06.907112 kernel: Remapping and enabling EFI services. May 15 00:28:06.907119 kernel: smp: Bringing up secondary CPUs ... 
May 15 00:28:06.907126 kernel: Detected PIPT I-cache on CPU1 May 15 00:28:06.907133 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 00:28:06.907140 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 15 00:28:06.907147 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:06.907154 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 00:28:06.907163 kernel: Detected PIPT I-cache on CPU2 May 15 00:28:06.907170 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 00:28:06.907178 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 15 00:28:06.907190 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:06.907198 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 00:28:06.907206 kernel: Detected PIPT I-cache on CPU3 May 15 00:28:06.907214 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 00:28:06.907221 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 15 00:28:06.907229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 00:28:06.907236 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 00:28:06.907244 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:28:06.907253 kernel: SMP: Total of 4 processors activated. May 15 00:28:06.907260 kernel: CPU features: detected: 32-bit EL0 Support May 15 00:28:06.907268 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 00:28:06.907276 kernel: CPU features: detected: Common not Private translations May 15 00:28:06.907284 kernel: CPU features: detected: CRC32 instructions May 15 00:28:06.907291 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 00:28:06.907301 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 00:28:06.907314 kernel: CPU features: detected: LSE atomic instructions May 15 00:28:06.907322 kernel: CPU features: detected: Privileged Access Never May 15 00:28:06.907330 kernel: CPU features: detected: RAS Extension Support May 15 00:28:06.907337 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 00:28:06.907344 kernel: CPU: All CPU(s) started at EL1 May 15 00:28:06.907352 kernel: alternatives: applying system-wide alternatives May 15 00:28:06.907360 kernel: devtmpfs: initialized May 15 00:28:06.907368 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:28:06.907376 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:28:06.907386 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:28:06.907394 kernel: SMBIOS 3.0.0 present. 
May 15 00:28:06.907402 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 15 00:28:06.907409 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 00:28:06.907417 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 00:28:06.907424 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 00:28:06.907432 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 00:28:06.907440 kernel: audit: initializing netlink subsys (disabled) May 15 00:28:06.907447 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 May 15 00:28:06.907456 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:28:06.907464 kernel: cpuidle: using governor menu May 15 00:28:06.907471 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 00:28:06.907479 kernel: ASID allocator initialised with 32768 entries May 15 00:28:06.907486 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:28:06.907494 kernel: Serial: AMBA PL011 UART driver May 15 00:28:06.907501 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 00:28:06.907509 kernel: Modules: 0 pages in range for non-PLT usage May 15 00:28:06.907516 kernel: Modules: 509008 pages in range for PLT usage May 15 00:28:06.907525 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:28:06.907533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 00:28:06.907551 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 00:28:06.907560 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 00:28:06.907567 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:28:06.907575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 00:28:06.907583 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 00:28:06.907590 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 00:28:06.907598 kernel: ACPI: Added _OSI(Module Device) May 15 00:28:06.907608 kernel: ACPI: Added _OSI(Processor Device) May 15 00:28:06.907615 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:28:06.907623 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:28:06.907631 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:28:06.907638 kernel: ACPI: Interpreter enabled May 15 00:28:06.907646 kernel: ACPI: Using GIC for interrupt routing May 15 00:28:06.907653 kernel: ACPI: MCFG table detected, 1 entries May 15 00:28:06.907661 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 00:28:06.907668 kernel: printk: console [ttyAMA0] enabled May 15 00:28:06.907677 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:28:06.907821 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:28:06.907904 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 00:28:06.907976 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 00:28:06.908049 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 00:28:06.908121 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 00:28:06.908131 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 00:28:06.908143 kernel: PCI host bridge to bus 0000:00 May 15 00:28:06.908223 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 00:28:06.908291 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 15 00:28:06.908367 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 00:28:06.908432 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:28:06.908524 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 00:28:06.908707 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:28:06.908790 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 00:28:06.908862 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 00:28:06.908936 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 00:28:06.909009 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 00:28:06.909082 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 00:28:06.909155 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 00:28:06.909222 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 00:28:06.909290 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 00:28:06.909364 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 00:28:06.909375 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 00:28:06.909383 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 00:28:06.909391 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 00:28:06.909399 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 00:28:06.909407 kernel: iommu: Default domain type: Translated May 15 00:28:06.909415 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 00:28:06.909425 kernel: efivars: Registered efivars operations May 15 00:28:06.909433 kernel: vgaarb: loaded May 15 00:28:06.909440 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 00:28:06.909448 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:28:06.909455 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:28:06.909463 kernel: pnp: PnP ACPI init May 15 00:28:06.909560 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 00:28:06.909572 kernel: pnp: PnP ACPI: found 1 devices May 15 00:28:06.909583 kernel: NET: Registered PF_INET protocol family May 15 00:28:06.909591 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:28:06.909599 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 00:28:06.909606 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:28:06.909614 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 00:28:06.909622 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 00:28:06.909630 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 00:28:06.909638 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:28:06.909645 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:28:06.909655 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:28:06.909662 kernel: PCI: CLS 0 bytes, default 64 May 15 00:28:06.909670 kernel: kvm [1]: HYP mode not available
May 15 00:28:06.909677 kernel: Initialise system trusted keyrings May 15 00:28:06.909685 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 00:28:06.909692 kernel: Key type asymmetric registered May 15 00:28:06.909700 kernel: Asymmetric key parser 'x509' registered May 15 00:28:06.909708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 00:28:06.909716 kernel: io scheduler mq-deadline registered May 15 00:28:06.909725 kernel: io scheduler kyber registered May 15 00:28:06.909733 kernel: io scheduler bfq registered May 15 00:28:06.909741 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 00:28:06.909749 kernel: ACPI: button: Power Button [PWRB] May 15 00:28:06.909757 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 00:28:06.909836 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 00:28:06.909847 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:28:06.909854 kernel: thunder_xcv, ver 1.0 May 15 00:28:06.909862 kernel: thunder_bgx, ver 1.0 May 15 00:28:06.909872 kernel: nicpf, ver 1.0 May 15 00:28:06.909880 kernel: nicvf, ver 1.0 May 15 00:28:06.909960 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 00:28:06.910031 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:28:06 UTC (1747268886) May 15 00:28:06.910041 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 00:28:06.910049 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 00:28:06.910057 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 00:28:06.910065 kernel: watchdog: Hard watchdog permanently disabled May 15 00:28:06.910074 kernel: NET: Registered PF_INET6 protocol family May 15 00:28:06.910082 kernel: Segment Routing with IPv6 May 15 00:28:06.910089 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:28:06.910097 kernel: NET: Registered PF_PACKET protocol family May 15 00:28:06.910104 kernel: Key type dns_resolver registered May 15 00:28:06.910112 kernel: registered taskstats version 1 May 15 00:28:06.910120 kernel: Loading compiled-in X.509 certificates May 15 00:28:06.910127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 6afb3c096bffb4980a4bcc170ebe3729821d8e0d' May 15 00:28:06.910135 kernel: Key type .fscrypt registered May 15 00:28:06.910145 kernel: Key type fscrypt-provisioning registered May 15 00:28:06.910153 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:28:06.910160 kernel: ima: Allocated hash algorithm: sha1 May 15 00:28:06.910168 kernel: ima: No architecture policies found May 15 00:28:06.910176 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 00:28:06.910184 kernel: clk: Disabling unused clocks May 15 00:28:06.910196 kernel: Freeing unused kernel memory: 39424K May 15 00:28:06.910205 kernel: Run /init as init process May 15 00:28:06.910213 kernel: with arguments: May 15 00:28:06.910222 kernel: /init May 15 00:28:06.910231 kernel: with environment: May 15 00:28:06.910239 kernel: HOME=/ May 15 00:28:06.910247 kernel: TERM=linux May 15 00:28:06.910254 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:28:06.910264 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 00:28:06.910274 systemd[1]: Detected virtualization kvm. May 15 00:28:06.910283 systemd[1]: Detected architecture arm64. May 15 00:28:06.910293 systemd[1]: Running in initrd. May 15 00:28:06.910307 systemd[1]: No hostname configured, using default hostname. May 15 00:28:06.910317 systemd[1]: Hostname set to . May 15 00:28:06.910326 systemd[1]: Initializing machine ID from VM UUID. May 15 00:28:06.910334 systemd[1]: Queued start job for default target initrd.target. May 15 00:28:06.910343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:28:06.910351 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:28:06.910360 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 00:28:06.910370 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:28:06.910378 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 00:28:06.910387 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 00:28:06.910397 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 00:28:06.910405 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 00:28:06.910413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:28:06.910423 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:28:06.910432 systemd[1]: Reached target paths.target - Path Units. May 15 00:28:06.910440 systemd[1]: Reached target slices.target - Slice Units. May 15 00:28:06.910448 systemd[1]: Reached target swap.target - Swaps. May 15 00:28:06.910457 systemd[1]: Reached target timers.target - Timer Units. May 15 00:28:06.910465 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:28:06.910474 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:28:06.910482 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 00:28:06.910491 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 00:28:06.910501 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 15 00:28:06.910509 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:28:06.910518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:28:06.910526 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:28:06.910534 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 00:28:06.910551 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:28:06.910560 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 00:28:06.910568 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:28:06.910576 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:28:06.910586 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:28:06.910595 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:06.910603 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 00:28:06.910612 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:28:06.910620 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:28:06.910629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 00:28:06.910639 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:06.910648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:28:06.910676 systemd-journald[237]: Collecting audit messages is disabled. May 15 00:28:06.910698 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 00:28:06.910707 systemd-journald[237]: Journal started May 15 00:28:06.910726 systemd-journald[237]: Runtime Journal (/run/log/journal/892ec915db1b47b88fe91ca47c95b7f2) is 5.9M, max 47.3M, 41.4M free. May 15 00:28:06.920670 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 00:28:06.920699 kernel: Bridge firewalling registered May 15 00:28:06.901448 systemd-modules-load[238]: Inserted module 'overlay' May 15 00:28:06.922761 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:28:06.918503 systemd-modules-load[238]: Inserted module 'br_netfilter' May 15 00:28:06.926334 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:28:06.926806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:28:06.929168 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:28:06.933199 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:28:06.934808 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:28:06.939899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:06.942044 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 00:28:06.944795 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:28:06.947854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:28:06.951683 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 15 00:28:06.964068 dracut-cmdline[268]: dracut-dracut-053 May 15 00:28:06.967369 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596 May 15 00:28:06.986751 systemd-resolved[275]: Positive Trust Anchors: May 15 00:28:06.986770 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:28:06.986803 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:28:06.993026 systemd-resolved[275]: Defaulting to hostname 'linux'. May 15 00:28:06.996773 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:28:06.997901 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:28:07.037575 kernel: SCSI subsystem initialized May 15 00:28:07.042561 kernel: Loading iSCSI transport class v2.0-870. May 15 00:28:07.050562 kernel: iscsi: registered transport (tcp) May 15 00:28:07.063597 kernel: iscsi: registered transport (qla4xxx) May 15 00:28:07.063653 kernel: QLogic iSCSI HBA Driver May 15 00:28:07.107196 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 00:28:07.123740 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 00:28:07.140569 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:28:07.140621 kernel: device-mapper: uevent: version 1.0.3 May 15 00:28:07.140638 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 00:28:07.190581 kernel: raid6: neonx8 gen() 15764 MB/s May 15 00:28:07.207572 kernel: raid6: neonx4 gen() 15625 MB/s May 15 00:28:07.224567 kernel: raid6: neonx2 gen() 13171 MB/s May 15 00:28:07.241565 kernel: raid6: neonx1 gen() 10458 MB/s May 15 00:28:07.258566 kernel: raid6: int64x8 gen() 6938 MB/s May 15 00:28:07.275566 kernel: raid6: int64x4 gen() 7335 MB/s May 15 00:28:07.292565 kernel: raid6: int64x2 gen() 6118 MB/s May 15 00:28:07.309754 kernel: raid6: int64x1 gen() 5046 MB/s May 15 00:28:07.309769 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s May 15 00:28:07.327691 kernel: raid6: .... xor() 11899 MB/s, rmw enabled May 15 00:28:07.327708 kernel: raid6: using neon recovery algorithm May 15 00:28:07.332566 kernel: xor: measuring software checksum speed May 15 00:28:07.333959 kernel: 8regs : 17181 MB/sec May 15 00:28:07.333972 kernel: 32regs : 18860 MB/sec May 15 00:28:07.334666 kernel: arm64_neon : 26910 MB/sec May 15 00:28:07.334679 kernel: xor: using function: arm64_neon (26910 MB/sec) May 15 00:28:07.385571 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 00:28:07.396554 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 15 00:28:07.408747 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:28:07.420573 systemd-udevd[456]: Using default interface naming scheme 'v255'. May 15 00:28:07.423763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:28:07.430705 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 15 00:28:07.442981 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation May 15 00:28:07.470759 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:28:07.488756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:28:07.528395 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:28:07.534861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 00:28:07.549625 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 00:28:07.553273 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:28:07.555295 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:28:07.557727 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:28:07.566990 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 00:28:07.576851 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 00:28:07.583440 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:28:07.584467 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 00:28:07.588867 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:28:07.591805 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:28:07.591830 kernel: GPT:9289727 != 19775487 May 15 00:28:07.591840 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:28:07.591850 kernel: GPT:9289727 != 19775487 May 15 00:28:07.591868 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:28:07.591878 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:28:07.588987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:07.594509 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:28:07.595685 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:28:07.595845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:07.598078 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:07.610054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:07.616964 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (504) May 15 00:28:07.617006 kernel: BTRFS: device fsid c82d3215-8134-4516-8c53-9d29a8823a8c devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (521) May 15 00:28:07.623836 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 00:28:07.625354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:07.634687 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 00:28:07.639473 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 15 00:28:07.643617 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 00:28:07.644899 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 00:28:07.668713 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 00:28:07.670651 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 00:28:07.677409 disk-uuid[551]: Primary Header is updated. May 15 00:28:07.677409 disk-uuid[551]: Secondary Entries is updated. May 15 00:28:07.677409 disk-uuid[551]: Secondary Header is updated. May 15 00:28:07.681880 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:28:07.694262 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:08.692515 disk-uuid[552]: The operation has completed successfully. May 15 00:28:08.693959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:28:08.714947 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:28:08.715047 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 00:28:08.738785 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 00:28:08.741745 sh[573]: Success May 15 00:28:08.756584 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 00:28:08.786537 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 00:28:08.794992 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 00:28:08.796852 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 00:28:08.807088 kernel: BTRFS info (device dm-0): first mount of filesystem c82d3215-8134-4516-8c53-9d29a8823a8c May 15 00:28:08.807130 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:08.807149 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 00:28:08.809011 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 00:28:08.809029 kernel: BTRFS info (device dm-0): using free space tree May 15 00:28:08.814494 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 00:28:08.815718 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 00:28:08.828715 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 00:28:08.830507 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 00:28:08.839374 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:08.839422 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:08.839434 kernel: BTRFS info (device vda6): using free space tree May 15 00:28:08.841568 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:28:08.849435 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 00:28:08.851362 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:08.856714 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 00:28:08.866730 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 15 00:28:08.929681 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:28:08.941697 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:28:08.969332 systemd-networkd[766]: lo: Link UP May 15 00:28:08.969341 systemd-networkd[766]: lo: Gained carrier May 15 00:28:08.970067 systemd-networkd[766]: Enumeration completed May 15 00:28:08.970148 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:28:08.970501 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:28:08.970505 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:28:08.971291 systemd-networkd[766]: eth0: Link UP May 15 00:28:08.971303 systemd-networkd[766]: eth0: Gained carrier May 15 00:28:08.978444 ignition[664]: Ignition 2.19.0 May 15 00:28:08.971310 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:28:08.978450 ignition[664]: Stage: fetch-offline May 15 00:28:08.971882 systemd[1]: Reached target network.target - Network. May 15 00:28:08.978494 ignition[664]: no configs at "/usr/lib/ignition/base.d" May 15 00:28:08.984584 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:28:08.978503 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:08.978671 ignition[664]: parsed url from cmdline: "" May 15 00:28:08.978674 ignition[664]: no config URL provided May 15 00:28:08.978678 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:28:08.978685 ignition[664]: no config at "/usr/lib/ignition/user.ign" May 15 00:28:08.978709 ignition[664]: op(1): [started] loading QEMU firmware config module May 15 00:28:08.978714 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:28:08.990262 ignition[664]: op(1): [finished] loading QEMU firmware config module May 15 00:28:09.028664 ignition[664]: parsing config with SHA512: 422c6b75ef6237c72388fa87e57867905e47d314b61103a200b55615d66c6b33d9dba2c24ad0f0676950a11ae3312482534feee84d7e2e9bc5dd18122561280f May 15 00:28:09.032686 unknown[664]: fetched base config from "system" May 15 00:28:09.032697 unknown[664]: fetched user config from "qemu" May 15 00:28:09.034561 ignition[664]: fetch-offline: fetch-offline passed May 15 00:28:09.034654 ignition[664]: Ignition finished successfully May 15 00:28:09.036119 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:28:09.037498 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:28:09.044718 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 00:28:09.055805 ignition[772]: Ignition 2.19.0 May 15 00:28:09.055815 ignition[772]: Stage: kargs May 15 00:28:09.056021 ignition[772]: no configs at "/usr/lib/ignition/base.d" May 15 00:28:09.056030 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:09.057340 ignition[772]: kargs: kargs passed May 15 00:28:09.057398 ignition[772]: Ignition finished successfully May 15 00:28:09.061599 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
May 15 00:28:09.070688 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 00:28:09.080068 ignition[780]: Ignition 2.19.0 May 15 00:28:09.080077 ignition[780]: Stage: disks May 15 00:28:09.080257 ignition[780]: no configs at "/usr/lib/ignition/base.d" May 15 00:28:09.080267 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:09.082569 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 00:28:09.081117 ignition[780]: disks: disks passed May 15 00:28:09.081159 ignition[780]: Ignition finished successfully May 15 00:28:09.086246 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 00:28:09.087485 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 00:28:09.089523 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:28:09.091470 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:28:09.093247 systemd[1]: Reached target basic.target - Basic System. May 15 00:28:09.104694 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 00:28:09.115419 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 00:28:09.119038 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 00:28:09.121206 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 00:28:09.164470 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 00:28:09.166072 kernel: EXT4-fs (vda9): mounted filesystem 5a01cbd3-e7cb-4475-87b3-07e348161203 r/w with ordered data mode. Quota mode: none. May 15 00:28:09.165808 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 00:28:09.178649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:28:09.181028 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 00:28:09.182059 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 00:28:09.182100 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:28:09.182121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:28:09.188195 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 00:28:09.190757 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 00:28:09.196925 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) May 15 00:28:09.196974 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:09.196986 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:09.196996 kernel: BTRFS info (device vda6): using free space tree May 15 00:28:09.198553 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:28:09.199785 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 00:28:09.236815 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:28:09.241442 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory May 15 00:28:09.244925 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:28:09.248204 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:28:09.348061 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 00:28:09.358944 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 00:28:09.361760 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 00:28:09.368579 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:09.392204 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 00:28:09.396857 ignition[911]: INFO : Ignition 2.19.0 May 15 00:28:09.396857 ignition[911]: INFO : Stage: mount May 15 00:28:09.399748 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:28:09.399748 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:09.399748 ignition[911]: INFO : mount: mount passed May 15 00:28:09.399748 ignition[911]: INFO : Ignition finished successfully May 15 00:28:09.399884 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 00:28:09.411740 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 00:28:09.805818 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 00:28:09.817762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 00:28:09.824565 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925) May 15 00:28:09.827574 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b May 15 00:28:09.827622 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 00:28:09.827634 kernel: BTRFS info (device vda6): using free space tree May 15 00:28:09.834574 kernel: BTRFS info (device vda6): auto enabling async discard May 15 00:28:09.836076 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 00:28:09.872255 ignition[942]: INFO : Ignition 2.19.0 May 15 00:28:09.872255 ignition[942]: INFO : Stage: files May 15 00:28:09.872255 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:28:09.872255 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:09.872255 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 15 00:28:09.878152 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:28:09.878152 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:28:09.882140 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:28:09.882140 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:28:09.882140 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:28:09.881407 unknown[942]: wrote ssh authorized keys file for user: core May 15 00:28:09.887279 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:28:09.887279 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 00:28:10.023255 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 00:28:10.228152 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 00:28:10.228152 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:28:10.232391 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 15 00:28:10.532730 systemd-networkd[766]: eth0: Gained IPv6LL May 15 00:28:10.608949 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 15 00:28:11.045257 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 15 00:28:11.045257 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" May 15 00:28:11.049539 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:28:11.073505 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:28:11.101448 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:28:11.103141 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:28:11.103141 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 15 00:28:11.103141 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:28:11.103141 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:28:11.103141 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:28:11.103141 ignition[942]: INFO : files: files passed May 15 00:28:11.103141 ignition[942]: INFO : Ignition finished successfully May 15 00:28:11.104994 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 00:28:11.122733 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 00:28:11.125446 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 00:28:11.127153 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:28:11.128565 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 00:28:11.134517 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory May 15 00:28:11.137721 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:28:11.137721 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 00:28:11.141266 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:28:11.142860 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:28:11.144251 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 00:28:11.156757 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 00:28:11.176971 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:28:11.177083 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 00:28:11.179365 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 00:28:11.180438 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 00:28:11.183012 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 00:28:11.183944 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 00:28:11.200162 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:28:11.216809 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 00:28:11.224784 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 00:28:11.226019 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:28:11.228057 systemd[1]: Stopped target timers.target - Timer Units. May 15 00:28:11.229831 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:28:11.229955 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 00:28:11.232507 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 00:28:11.233620 systemd[1]: Stopped target basic.target - Basic System. May 15 00:28:11.235462 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 00:28:11.237383 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 00:28:11.239151 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 00:28:11.241058 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 00:28:11.242995 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 00:28:11.245005 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 00:28:11.246703 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 00:28:11.248647 systemd[1]: Stopped target swap.target - Swaps. May 15 00:28:11.250205 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:28:11.250340 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 00:28:11.252654 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
May 15 00:28:11.254573 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:28:11.256628 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 00:28:11.257620 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:28:11.259762 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:28:11.259886 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 00:28:11.262685 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:28:11.262798 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 00:28:11.264813 systemd[1]: Stopped target paths.target - Path Units. May 15 00:28:11.266447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:28:11.266559 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:28:11.268512 systemd[1]: Stopped target slices.target - Slice Units. May 15 00:28:11.270363 systemd[1]: Stopped target sockets.target - Socket Units. May 15 00:28:11.271979 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:28:11.272074 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 00:28:11.273760 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:28:11.273839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 00:28:11.276001 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:28:11.276108 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 00:28:11.277810 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:28:11.277905 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 00:28:11.289765 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 00:28:11.290660 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:28:11.290801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:28:11.295758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 00:28:11.296634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:28:11.296767 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:28:11.306500 ignition[996]: INFO : Ignition 2.19.0 May 15 00:28:11.306500 ignition[996]: INFO : Stage: umount May 15 00:28:11.306500 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:28:11.306500 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:28:11.306500 ignition[996]: INFO : umount: umount passed May 15 00:28:11.306500 ignition[996]: INFO : Ignition finished successfully May 15 00:28:11.300349 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:28:11.300453 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 00:28:11.304174 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:28:11.304260 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 00:28:11.308416 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:28:11.308506 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 00:28:11.311363 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 15 00:28:11.312123 systemd[1]: Stopped target network.target - Network. May 15 00:28:11.313372 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:28:11.313447 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 00:28:11.315070 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:28:11.315117 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 00:28:11.316793 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:28:11.316835 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 00:28:11.318710 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 00:28:11.318754 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 00:28:11.320854 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 00:28:11.322636 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 00:28:11.329104 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:28:11.329233 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 00:28:11.329585 systemd-networkd[766]: eth0: DHCPv6 lease lost May 15 00:28:11.331458 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:28:11.331592 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 00:28:11.334119 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:28:11.334169 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 00:28:11.344648 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 00:28:11.345688 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:28:11.345753 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 00:28:11.347735 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:28:11.347779 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 00:28:11.349664 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:28:11.349708 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 00:28:11.351425 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 00:28:11.351468 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:28:11.354230 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:28:11.362888 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:28:11.363006 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 00:28:11.367744 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:28:11.367836 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 00:28:11.370990 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:28:11.371099 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:28:11.373427 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:28:11.373483 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 00:28:11.374630 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:28:11.374662 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
May 15 00:28:11.376340 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:28:11.376387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 00:28:11.378866 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:28:11.378910 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 00:28:11.381843 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:28:11.381887 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 00:28:11.383986 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:28:11.384033 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 00:28:11.399706 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 00:28:11.400744 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:28:11.400809 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:28:11.402962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:28:11.403005 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:11.405185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:28:11.405267 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 00:28:11.407376 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 00:28:11.409508 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 00:28:11.419259 systemd[1]: Switching root. May 15 00:28:11.442743 systemd-journald[237]: Journal stopped May 15 00:28:12.163435 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 15 00:28:12.163490 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:28:12.163503 kernel: SELinux: policy capability open_perms=1 May 15 00:28:12.163516 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:28:12.163526 kernel: SELinux: policy capability always_check_network=0 May 15 00:28:12.163536 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:28:12.163575 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:28:12.163690 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:28:12.163706 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:28:12.163721 kernel: audit: type=1403 audit(1747268891.574:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:28:12.163733 systemd[1]: Successfully loaded SELinux policy in 33.840ms. May 15 00:28:12.163756 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.242ms. May 15 00:28:12.163771 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 00:28:12.163783 systemd[1]: Detected virtualization kvm. May 15 00:28:12.163797 systemd[1]: Detected architecture arm64. May 15 00:28:12.163808 systemd[1]: Detected first boot. May 15 00:28:12.163818 systemd[1]: Initializing machine ID from VM UUID. May 15 00:28:12.163829 zram_generator::config[1040]: No configuration found. May 15 00:28:12.163841 systemd[1]: Populated /etc with preset unit settings. 
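The "Initializing machine ID from VM UUID" entry above refers to systemd deriving /etc/machine-id from the UUID the hypervisor exposes, since this is the first boot of the VM. A rough sketch of that derivation follows, assuming the UUID is read from the SMBIOS product_uuid sysfs node and reformatted as 32 hex characters; the exact source path and any byte-order handling systemd applies are assumptions, not shown in this log.

    import pathlib
    import uuid

    # Assumed source of the VM UUID on SMBIOS-exposing guests; the log does
    # not show where systemd actually read it from on this machine.
    PRODUCT_UUID = pathlib.Path("/sys/class/dmi/id/product_uuid")

    def machine_id_from_vm_uuid() -> str:
        raw = PRODUCT_UUID.read_text().strip()
        # A machine ID is the same 128 bits rendered as 32 lowercase hex
        # characters without dashes.
        return uuid.UUID(raw).hex

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())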
May 15 00:28:12.163851 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 00:28:12.163864 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 00:28:12.163875 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 00:28:12.163886 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 00:28:12.163897 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 00:28:12.163908 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 00:28:12.163918 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 00:28:12.163929 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 00:28:12.163940 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 00:28:12.163953 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 00:28:12.163963 systemd[1]: Created slice user.slice - User and Session Slice. May 15 00:28:12.163974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 00:28:12.163985 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 00:28:12.163996 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 00:28:12.164007 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 00:28:12.164018 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 00:28:12.164030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 00:28:12.164040 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 00:28:12.164053 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 00:28:12.164063 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 00:28:12.164074 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 00:28:12.164085 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 00:28:12.164096 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 00:28:12.164107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 00:28:12.164118 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 00:28:12.164128 systemd[1]: Reached target slices.target - Slice Units. May 15 00:28:12.164141 systemd[1]: Reached target swap.target - Swaps. May 15 00:28:12.164151 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 00:28:12.164162 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 00:28:12.164173 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 00:28:12.164184 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 00:28:12.164194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 00:28:12.164205 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 00:28:12.164216 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
May 15 00:28:12.164226 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 00:28:12.164238 systemd[1]: Mounting media.mount - External Media Directory... May 15 00:28:12.164249 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 00:28:12.164261 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 00:28:12.164272 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 00:28:12.164291 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:28:12.164302 systemd[1]: Reached target machines.target - Containers. May 15 00:28:12.164313 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 00:28:12.164324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:28:12.164335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 00:28:12.164348 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 00:28:12.164359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:28:12.164370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:28:12.164381 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:28:12.164391 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 00:28:12.164402 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:28:12.164413 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 00:28:12.164424 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 00:28:12.164436 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 00:28:12.164447 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 00:28:12.164457 systemd[1]: Stopped systemd-fsck-usr.service. May 15 00:28:12.164468 kernel: fuse: init (API version 7.39) May 15 00:28:12.164478 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 00:28:12.164488 kernel: loop: module loaded May 15 00:28:12.164499 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 00:28:12.164510 kernel: ACPI: bus type drm_connector registered May 15 00:28:12.164520 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 00:28:12.164533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 00:28:12.164555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 00:28:12.164590 systemd-journald[1105]: Collecting audit messages is disabled. May 15 00:28:12.164613 systemd[1]: verity-setup.service: Deactivated successfully. May 15 00:28:12.164625 systemd-journald[1105]: Journal started May 15 00:28:12.164647 systemd-journald[1105]: Runtime Journal (/run/log/journal/892ec915db1b47b88fe91ca47c95b7f2) is 5.9M, max 47.3M, 41.4M free. May 15 00:28:12.164691 systemd[1]: Stopped verity-setup.service. May 15 00:28:11.948789 systemd[1]: Queued start job for default target multi-user.target. 
May 15 00:28:11.970305 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 00:28:11.970703 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 00:28:12.169689 systemd[1]: Started systemd-journald.service - Journal Service. May 15 00:28:12.170361 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 00:28:12.171706 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 00:28:12.172949 systemd[1]: Mounted media.mount - External Media Directory. May 15 00:28:12.174160 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 00:28:12.175464 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 00:28:12.176831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 00:28:12.178085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 00:28:12.179740 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:28:12.179952 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 00:28:12.182047 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:28:12.182347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:28:12.184739 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:28:12.184922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:28:12.186967 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 00:28:12.188497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:28:12.188654 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:28:12.190141 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:28:12.190267 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 00:28:12.191754 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:28:12.191879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:28:12.193381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 00:28:12.194835 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 00:28:12.196412 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 00:28:12.208904 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 00:28:12.219673 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 00:28:12.222044 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 00:28:12.223202 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:28:12.223255 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 00:28:12.225602 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 15 00:28:12.227977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 00:28:12.230267 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 00:28:12.231520 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 00:28:12.233041 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 00:28:12.238740 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 00:28:12.239980 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:28:12.243736 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 00:28:12.244965 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:28:12.248742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 00:28:12.253467 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 00:28:12.253669 systemd-journald[1105]: Time spent on flushing to /var/log/journal/892ec915db1b47b88fe91ca47c95b7f2 is 21.853ms for 854 entries. May 15 00:28:12.253669 systemd-journald[1105]: System Journal (/var/log/journal/892ec915db1b47b88fe91ca47c95b7f2) is 8.0M, max 195.6M, 187.6M free. May 15 00:28:12.290824 systemd-journald[1105]: Received client request to flush runtime journal. May 15 00:28:12.290897 kernel: loop0: detected capacity change from 0 to 189592 May 15 00:28:12.290932 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:28:12.262785 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 00:28:12.265732 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 00:28:12.267662 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 00:28:12.269985 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 00:28:12.277895 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 00:28:12.283014 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 00:28:12.289268 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 00:28:12.304817 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 15 00:28:12.307430 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 00:28:12.309019 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 00:28:12.310769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 00:28:12.319568 kernel: loop1: detected capacity change from 0 to 114432 May 15 00:28:12.334204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:28:12.335082 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 15 00:28:12.338047 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 00:28:12.340348 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 00:28:12.347868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 00:28:12.363381 kernel: loop2: detected capacity change from 0 to 114328 May 15 00:28:12.374243 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. May 15 00:28:12.374261 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. 
May 15 00:28:12.378688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 00:28:12.406589 kernel: loop3: detected capacity change from 0 to 189592 May 15 00:28:12.411568 kernel: loop4: detected capacity change from 0 to 114432 May 15 00:28:12.416579 kernel: loop5: detected capacity change from 0 to 114328 May 15 00:28:12.418975 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 00:28:12.419383 (sd-merge)[1177]: Merged extensions into '/usr'. May 15 00:28:12.425130 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)... May 15 00:28:12.425151 systemd[1]: Reloading... May 15 00:28:12.486701 zram_generator::config[1202]: No configuration found. May 15 00:28:12.548575 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:28:12.592260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:28:12.628956 systemd[1]: Reloading finished in 203 ms. May 15 00:28:12.659867 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 00:28:12.661458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 00:28:12.672871 systemd[1]: Starting ensure-sysext.service... May 15 00:28:12.674784 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 00:28:12.686227 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... May 15 00:28:12.686245 systemd[1]: Reloading... May 15 00:28:12.694415 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:28:12.694694 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 00:28:12.695329 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:28:12.695568 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. May 15 00:28:12.695623 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. May 15 00:28:12.697885 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:28:12.697899 systemd-tmpfiles[1240]: Skipping /boot May 15 00:28:12.704856 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. May 15 00:28:12.704873 systemd-tmpfiles[1240]: Skipping /boot May 15 00:28:12.729575 zram_generator::config[1267]: No configuration found. May 15 00:28:12.819532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:28:12.856059 systemd[1]: Reloading finished in 169 ms. May 15 00:28:12.870682 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 00:28:12.882138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 00:28:12.890948 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 15 00:28:12.893681 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
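The (sd-merge) entries above show systemd-sysext discovering the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images and overlaying them onto /usr, after which systemd reloads and picks up units shipped by the merged extensions (hence the docker.socket legacy-path notice). The sketch below lists candidate images in the directories systemd-sysext documents as its search path; the directory list is an assumption taken from that documentation, not from anything in this log.

    import pathlib

    # Directories systemd-sysext(8) documents as its search path (assumed;
    # the log only shows the merge result, not the scan).
    SEARCH_DIRS = [
        "/etc/extensions",
        "/run/extensions",
        "/var/lib/extensions",
        "/usr/lib/extensions",
    ]

    def list_sysext_images() -> list[str]:
        found = []
        for d in map(pathlib.Path, SEARCH_DIRS):
            if d.is_dir():
                # Extension images are <name>.raw files or plain directory
                # trees named after the extension.
                found += sorted(str(p) for p in d.iterdir())
        return found

    if __name__ == "__main__":
        for image in list_sysext_images():
            print(image)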
May 15 00:28:12.896370 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 00:28:12.899912 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 00:28:12.904830 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 00:28:12.912901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 00:28:12.917709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:28:12.920997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:28:12.929044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:28:12.932722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:28:12.936808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:28:12.937916 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 00:28:12.939803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:28:12.939931 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:28:12.941787 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:28:12.942005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:28:12.950163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:28:12.951319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:28:12.957065 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:28:12.975905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:28:12.978269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:28:12.979953 systemd-udevd[1312]: Using default interface naming scheme 'v255'. May 15 00:28:12.983959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:28:12.985257 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:28:12.987047 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 00:28:12.995713 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 00:28:12.999645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 00:28:13.001853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:28:13.001987 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:28:13.003810 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:28:13.003952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:28:13.005893 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:28:13.006016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:28:13.007686 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 00:28:13.011536 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 15 00:28:13.015889 augenrules[1336]: No rules May 15 00:28:13.016687 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 15 00:28:13.018272 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 00:28:13.027573 systemd[1]: Finished ensure-sysext.service. May 15 00:28:13.032166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 00:28:13.043888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 00:28:13.049770 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 00:28:13.056479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 00:28:13.061886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 00:28:13.065046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 00:28:13.071932 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 00:28:13.080795 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 00:28:13.082114 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:28:13.082603 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 00:28:13.084228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:28:13.084393 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 00:28:13.086525 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:28:13.086682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 00:28:13.088894 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:28:13.089046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 00:28:13.090902 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:28:13.091044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 00:28:13.097796 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 00:28:13.101572 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1353) May 15 00:28:13.109874 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:28:13.109957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 00:28:13.149108 systemd-resolved[1308]: Positive Trust Anchors: May 15 00:28:13.149130 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:28:13.149163 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 00:28:13.157151 systemd-resolved[1308]: Defaulting to hostname 'linux'. May 15 00:28:13.162966 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 00:28:13.165357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 00:28:13.166666 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 00:28:13.180934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 00:28:13.183181 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 00:28:13.184557 systemd[1]: Reached target time-set.target - System Time Set. May 15 00:28:13.203930 systemd-networkd[1376]: lo: Link UP May 15 00:28:13.204182 systemd-networkd[1376]: lo: Gained carrier May 15 00:28:13.205150 systemd-networkd[1376]: Enumeration completed May 15 00:28:13.206644 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:28:13.206737 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:28:13.209800 systemd-networkd[1376]: eth0: Link UP May 15 00:28:13.209870 systemd-networkd[1376]: eth0: Gained carrier May 15 00:28:13.209937 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 00:28:13.216816 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 00:28:13.218201 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 00:28:13.222250 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 00:28:13.225637 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:28:13.226785 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. May 15 00:28:12.748563 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 00:28:12.828478 systemd-journald[1105]: Time jumped backwards, rotating. May 15 00:28:12.748572 systemd-resolved[1308]: Clock change detected. Flushing caches. May 15 00:28:12.828640 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:28:12.748614 systemd-timesyncd[1377]: Initial clock synchronization to Thu 2025-05-15 00:28:12.748462 UTC. May 15 00:28:12.749357 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 00:28:12.754399 systemd[1]: Reached target network.target - Network. May 15 00:28:12.764964 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
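With the DHCPv4 lease logged above (10.0.0.112/16 via gateway 10.0.0.1, which also serves as the NTP peer for systemd-timesyncd), the interface's addressing is fully determined; the short check below uses Python's ipaddress module to derive the network, broadcast address and usable host count for that /16 and to confirm the gateway is on-link.

    import ipaddress

    # Values taken from the DHCPv4 lease logged above.
    iface = ipaddress.ip_interface("10.0.0.112/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                      # 10.0.0.0/16
    print(iface.network.broadcast_address)    # 10.0.255.255
    print(iface.network.num_addresses - 2)    # 65534 usable hosts
    print(gateway in iface.network)           # True: gateway is on-link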
May 15 00:28:12.767294 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 00:28:12.821470 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 00:28:12.858378 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 00:28:12.860002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 00:28:12.861885 systemd[1]: Reached target sysinit.target - System Initialization. May 15 00:28:12.863108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 00:28:12.864433 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 00:28:12.865874 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 00:28:12.867063 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 00:28:12.868346 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 00:28:12.869648 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 00:28:12.869688 systemd[1]: Reached target paths.target - Path Units. May 15 00:28:12.870639 systemd[1]: Reached target timers.target - Timer Units. May 15 00:28:12.899003 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 00:28:12.901820 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 00:28:12.910906 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 00:28:12.913985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 00:28:12.915765 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 00:28:12.916957 systemd[1]: Reached target sockets.target - Socket Units. May 15 00:28:12.917911 systemd[1]: Reached target basic.target - Basic System. May 15 00:28:12.918929 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 00:28:12.918963 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 00:28:12.919894 systemd[1]: Starting containerd.service - containerd container runtime... May 15 00:28:12.921910 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 00:28:12.922860 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:28:12.924026 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 00:28:12.927124 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 00:28:12.929134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 00:28:12.945973 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 00:28:12.955322 jq[1412]: false May 15 00:28:12.953334 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 00:28:12.955873 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 00:28:12.959194 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 15 00:28:12.962600 extend-filesystems[1413]: Found loop3 May 15 00:28:12.962600 extend-filesystems[1413]: Found loop4 May 15 00:28:12.962600 extend-filesystems[1413]: Found loop5 May 15 00:28:12.962600 extend-filesystems[1413]: Found vda May 15 00:28:12.962600 extend-filesystems[1413]: Found vda1 May 15 00:28:12.962600 extend-filesystems[1413]: Found vda2 May 15 00:28:12.962600 extend-filesystems[1413]: Found vda3 May 15 00:28:12.962600 extend-filesystems[1413]: Found usr May 15 00:28:12.962600 extend-filesystems[1413]: Found vda4 May 15 00:28:12.962600 extend-filesystems[1413]: Found vda6 May 15 00:28:12.962600 extend-filesystems[1413]: Found vda7 May 15 00:28:12.962600 extend-filesystems[1413]: Found vda9 May 15 00:28:12.962600 extend-filesystems[1413]: Checking size of /dev/vda9 May 15 00:28:12.984122 extend-filesystems[1413]: Resized partition /dev/vda9 May 15 00:28:12.975905 dbus-daemon[1411]: [system] SELinux support is enabled May 15 00:28:12.966013 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 00:28:12.969440 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 00:28:12.972438 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 00:28:12.973997 systemd[1]: Starting update-engine.service - Update Engine... May 15 00:28:12.976657 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 00:28:12.978939 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 00:28:12.983927 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 00:28:12.990546 extend-filesystems[1434]: resize2fs 1.47.1 (20-May-2024) May 15 00:28:13.005877 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1367) May 15 00:28:13.005905 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 00:28:12.993272 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 00:28:13.006035 jq[1431]: true May 15 00:28:12.993425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 00:28:12.993698 systemd[1]: motdgen.service: Deactivated successfully. May 15 00:28:12.993882 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 00:28:13.002238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 00:28:13.002391 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 00:28:13.020280 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 00:28:13.022885 jq[1439]: true May 15 00:28:13.028784 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 00:28:13.036214 tar[1436]: linux-arm64/helm May 15 00:28:13.037433 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 00:28:13.037473 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
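The kernel messages above record an on-line resize of the root ext4 filesystem on /dev/vda9 from 553472 to 1864699 blocks; with the 4 KiB block size the extend-filesystems summary reports just afterwards, that is roughly 2.1 GiB grown to about 7.1 GiB, as the short calculation below shows.

    GIB = 1024 ** 3
    BLOCK = 4096  # 4 KiB ext4 blocks, per the extend-filesystems summary

    old_blocks, new_blocks = 553_472, 1_864_699  # from the kernel resize message

    old_bytes = old_blocks * BLOCK               # 2,267,021,312 bytes
    new_bytes = new_blocks * BLOCK               # 7,637,807,104 bytes

    print(f"before: {old_bytes / GIB:.2f} GiB")                 # ~2.11 GiB
    print(f"after:  {new_bytes / GIB:.2f} GiB")                 # ~7.11 GiB
    print(f"growth: {(new_bytes - old_bytes) / GIB:.2f} GiB")   # ~5.00 GiB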
May 15 00:28:13.041030 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 00:28:13.041061 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 00:28:13.041101 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button) May 15 00:28:13.041303 systemd-logind[1420]: New seat seat0. May 15 00:28:13.042730 extend-filesystems[1434]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 00:28:13.042730 extend-filesystems[1434]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 00:28:13.042730 extend-filesystems[1434]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 00:28:13.052024 extend-filesystems[1413]: Resized filesystem in /dev/vda9 May 15 00:28:13.042933 systemd[1]: Started systemd-logind.service - User Login Management. May 15 00:28:13.049650 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 00:28:13.052180 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 00:28:13.088569 update_engine[1426]: I20250515 00:28:13.088356 1426 main.cc:92] Flatcar Update Engine starting May 15 00:28:13.091777 update_engine[1426]: I20250515 00:28:13.090204 1426 update_check_scheduler.cc:74] Next update check in 3m21s May 15 00:28:13.090443 systemd[1]: Started update-engine.service - Update Engine. May 15 00:28:13.097060 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 00:28:13.124157 bash[1467]: Updated "/home/core/.ssh/authorized_keys" May 15 00:28:13.129394 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 00:28:13.133286 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 00:28:13.152882 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 00:28:13.238297 containerd[1440]: time="2025-05-15T00:28:13.238203203Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 15 00:28:13.265120 containerd[1440]: time="2025-05-15T00:28:13.265025123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:13.266376 containerd[1440]: time="2025-05-15T00:28:13.266342323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:13.266864 containerd[1440]: time="2025-05-15T00:28:13.266447043Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 00:28:13.266864 containerd[1440]: time="2025-05-15T00:28:13.266472923Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 00:28:13.266864 containerd[1440]: time="2025-05-15T00:28:13.266647843Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 00:28:13.266864 containerd[1440]: time="2025-05-15T00:28:13.266665803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 May 15 00:28:13.266864 containerd[1440]: time="2025-05-15T00:28:13.266717323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:13.266864 containerd[1440]: time="2025-05-15T00:28:13.266730763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:13.267293 containerd[1440]: time="2025-05-15T00:28:13.267206243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:13.267360 containerd[1440]: time="2025-05-15T00:28:13.267346443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 00:28:13.267413 containerd[1440]: time="2025-05-15T00:28:13.267398083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:13.267517 containerd[1440]: time="2025-05-15T00:28:13.267492083Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 00:28:13.267723 containerd[1440]: time="2025-05-15T00:28:13.267651643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:13.268137 containerd[1440]: time="2025-05-15T00:28:13.268115683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 00:28:13.268395 containerd[1440]: time="2025-05-15T00:28:13.268371883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 00:28:13.268458 containerd[1440]: time="2025-05-15T00:28:13.268444323Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 00:28:13.268646 containerd[1440]: time="2025-05-15T00:28:13.268626683Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 00:28:13.268960 containerd[1440]: time="2025-05-15T00:28:13.268812483Z" level=info msg="metadata content store policy set" policy=shared May 15 00:28:13.272261 containerd[1440]: time="2025-05-15T00:28:13.271846843Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 00:28:13.272261 containerd[1440]: time="2025-05-15T00:28:13.271902483Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 00:28:13.272261 containerd[1440]: time="2025-05-15T00:28:13.271918683Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 00:28:13.272261 containerd[1440]: time="2025-05-15T00:28:13.271939883Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 00:28:13.272261 containerd[1440]: time="2025-05-15T00:28:13.271954963Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 15 00:28:13.272261 containerd[1440]: time="2025-05-15T00:28:13.272084963Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 00:28:13.272619 containerd[1440]: time="2025-05-15T00:28:13.272595723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 00:28:13.272871 containerd[1440]: time="2025-05-15T00:28:13.272849883Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 00:28:13.273114 containerd[1440]: time="2025-05-15T00:28:13.272993763Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 00:28:13.273114 containerd[1440]: time="2025-05-15T00:28:13.273015563Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 00:28:13.273114 containerd[1440]: time="2025-05-15T00:28:13.273032483Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273114 containerd[1440]: time="2025-05-15T00:28:13.273045603Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273114 containerd[1440]: time="2025-05-15T00:28:13.273058243Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273240 containerd[1440]: time="2025-05-15T00:28:13.273223763Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273295 containerd[1440]: time="2025-05-15T00:28:13.273282643Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273396803Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273418043Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273430683Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273451963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273472643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273485163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273497523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273521123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273537443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273550163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273564843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273577643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273600083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 00:28:13.273914 containerd[1440]: time="2025-05-15T00:28:13.273612323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 00:28:13.274198 containerd[1440]: time="2025-05-15T00:28:13.273624283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 00:28:13.274198 containerd[1440]: time="2025-05-15T00:28:13.273637163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 00:28:13.274198 containerd[1440]: time="2025-05-15T00:28:13.273653123Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 00:28:13.274198 containerd[1440]: time="2025-05-15T00:28:13.273676523Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 00:28:13.274198 containerd[1440]: time="2025-05-15T00:28:13.273687763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 00:28:13.274198 containerd[1440]: time="2025-05-15T00:28:13.273699963Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 00:28:13.274992 containerd[1440]: time="2025-05-15T00:28:13.274965483Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 00:28:13.275080 containerd[1440]: time="2025-05-15T00:28:13.275062723Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 00:28:13.275821 containerd[1440]: time="2025-05-15T00:28:13.275154803Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 00:28:13.275821 containerd[1440]: time="2025-05-15T00:28:13.275175283Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 00:28:13.275821 containerd[1440]: time="2025-05-15T00:28:13.275185643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 00:28:13.275821 containerd[1440]: time="2025-05-15T00:28:13.275204003Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 00:28:13.275821 containerd[1440]: time="2025-05-15T00:28:13.275213963Z" level=info msg="NRI interface is disabled by configuration." May 15 00:28:13.275821 containerd[1440]: time="2025-05-15T00:28:13.275224443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 00:28:13.275943 containerd[1440]: time="2025-05-15T00:28:13.275498763Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 00:28:13.275943 containerd[1440]: time="2025-05-15T00:28:13.275565963Z" level=info msg="Connect containerd service" May 15 00:28:13.275943 containerd[1440]: time="2025-05-15T00:28:13.275662803Z" level=info msg="using legacy CRI server" May 15 00:28:13.275943 containerd[1440]: time="2025-05-15T00:28:13.275669803Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 00:28:13.275943 containerd[1440]: time="2025-05-15T00:28:13.275752723Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 00:28:13.277102 containerd[1440]: time="2025-05-15T00:28:13.277075483Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 00:28:13.277399 
containerd[1440]: time="2025-05-15T00:28:13.277329323Z" level=info msg="Start subscribing containerd event" May 15 00:28:13.278981 containerd[1440]: time="2025-05-15T00:28:13.278948923Z" level=info msg="Start recovering state" May 15 00:28:13.279060 containerd[1440]: time="2025-05-15T00:28:13.279041963Z" level=info msg="Start event monitor" May 15 00:28:13.279086 containerd[1440]: time="2025-05-15T00:28:13.279059003Z" level=info msg="Start snapshots syncer" May 15 00:28:13.279086 containerd[1440]: time="2025-05-15T00:28:13.279071043Z" level=info msg="Start cni network conf syncer for default" May 15 00:28:13.279086 containerd[1440]: time="2025-05-15T00:28:13.279084003Z" level=info msg="Start streaming server" May 15 00:28:13.279730 containerd[1440]: time="2025-05-15T00:28:13.279669283Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 00:28:13.279730 containerd[1440]: time="2025-05-15T00:28:13.279720283Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 00:28:13.279880 containerd[1440]: time="2025-05-15T00:28:13.279790403Z" level=info msg="containerd successfully booted in 0.044368s" May 15 00:28:13.281880 systemd[1]: Started containerd.service - containerd container runtime. May 15 00:28:13.406728 tar[1436]: linux-arm64/LICENSE May 15 00:28:13.406728 tar[1436]: linux-arm64/README.md May 15 00:28:13.417249 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 00:28:13.478385 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 00:28:13.498054 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 00:28:13.512050 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 00:28:13.518842 systemd[1]: issuegen.service: Deactivated successfully. May 15 00:28:13.519033 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 00:28:13.521663 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 00:28:13.533436 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 00:28:13.536385 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 00:28:13.538515 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 00:28:13.539983 systemd[1]: Reached target getty.target - Login Prompts. May 15 00:28:14.533941 systemd-networkd[1376]: eth0: Gained IPv6LL May 15 00:28:14.536961 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 00:28:14.541164 systemd[1]: Reached target network-online.target - Network is Online. May 15 00:28:14.550053 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 00:28:14.552708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:14.554963 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 00:28:14.570691 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 00:28:14.570996 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 00:28:14.572760 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 00:28:14.580441 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 00:28:15.045757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:15.047224 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 15 00:28:15.048942 systemd[1]: Startup finished in 587ms (kernel) + 4.865s (initrd) + 3.989s (userspace) = 9.442s. May 15 00:28:15.050465 (kubelet)[1524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:28:15.501736 kubelet[1524]: E0515 00:28:15.501624 1524 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:28:15.503844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:28:15.504013 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:28:18.587590 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 00:28:18.588657 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:36402.service - OpenSSH per-connection server daemon (10.0.0.1:36402). May 15 00:28:18.648101 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 36402 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:18.651417 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:18.658862 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 00:28:18.676102 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 00:28:18.677835 systemd-logind[1420]: New session 1 of user core. May 15 00:28:18.685780 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 00:28:18.699104 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 00:28:18.701718 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 00:28:18.785593 systemd[1542]: Queued start job for default target default.target. May 15 00:28:18.797730 systemd[1542]: Created slice app.slice - User Application Slice. May 15 00:28:18.797762 systemd[1542]: Reached target paths.target - Paths. May 15 00:28:18.797794 systemd[1542]: Reached target timers.target - Timers. May 15 00:28:18.799101 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 00:28:18.810912 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 00:28:18.811019 systemd[1542]: Reached target sockets.target - Sockets. May 15 00:28:18.811033 systemd[1542]: Reached target basic.target - Basic System. May 15 00:28:18.811072 systemd[1542]: Reached target default.target - Main User Target. May 15 00:28:18.811100 systemd[1542]: Startup finished in 102ms. May 15 00:28:18.811253 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 00:28:18.812535 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 00:28:18.874005 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:36416.service - OpenSSH per-connection server daemon (10.0.0.1:36416). May 15 00:28:18.917470 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 36416 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:18.918906 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:18.922822 systemd-logind[1420]: New session 2 of user core. May 15 00:28:18.931924 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 15 00:28:18.984489 sshd[1553]: pam_unix(sshd:session): session closed for user core May 15 00:28:18.997516 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:36416.service: Deactivated successfully. May 15 00:28:18.999229 systemd[1]: session-2.scope: Deactivated successfully. May 15 00:28:19.001960 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. May 15 00:28:19.020156 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:36426.service - OpenSSH per-connection server daemon (10.0.0.1:36426). May 15 00:28:19.021166 systemd-logind[1420]: Removed session 2. May 15 00:28:19.054546 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 36426 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:19.055841 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:19.059837 systemd-logind[1420]: New session 3 of user core. May 15 00:28:19.065947 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 00:28:19.113491 sshd[1560]: pam_unix(sshd:session): session closed for user core May 15 00:28:19.127104 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:36426.service: Deactivated successfully. May 15 00:28:19.128466 systemd[1]: session-3.scope: Deactivated successfully. May 15 00:28:19.129954 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. May 15 00:28:19.131543 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:36432.service - OpenSSH per-connection server daemon (10.0.0.1:36432). May 15 00:28:19.132304 systemd-logind[1420]: Removed session 3. May 15 00:28:19.169911 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 36432 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:19.171073 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:19.174742 systemd-logind[1420]: New session 4 of user core. May 15 00:28:19.182928 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 00:28:19.234012 sshd[1567]: pam_unix(sshd:session): session closed for user core May 15 00:28:19.243047 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:36432.service: Deactivated successfully. May 15 00:28:19.244386 systemd[1]: session-4.scope: Deactivated successfully. May 15 00:28:19.245579 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. May 15 00:28:19.246668 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:36448.service - OpenSSH per-connection server daemon (10.0.0.1:36448). May 15 00:28:19.247433 systemd-logind[1420]: Removed session 4. May 15 00:28:19.284947 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 36448 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:19.286379 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:19.290994 systemd-logind[1420]: New session 5 of user core. May 15 00:28:19.299949 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 00:28:19.358040 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 00:28:19.358291 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:28:19.373503 sudo[1577]: pam_unix(sudo:session): session closed for user root May 15 00:28:19.375176 sshd[1574]: pam_unix(sshd:session): session closed for user core May 15 00:28:19.385064 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:36448.service: Deactivated successfully. 
May 15 00:28:19.386519 systemd[1]: session-5.scope: Deactivated successfully. May 15 00:28:19.388761 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. May 15 00:28:19.389955 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:36462.service - OpenSSH per-connection server daemon (10.0.0.1:36462). May 15 00:28:19.390654 systemd-logind[1420]: Removed session 5. May 15 00:28:19.428525 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 36462 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:19.429815 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:19.433630 systemd-logind[1420]: New session 6 of user core. May 15 00:28:19.446920 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 00:28:19.498160 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 00:28:19.498433 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:28:19.501259 sudo[1586]: pam_unix(sudo:session): session closed for user root May 15 00:28:19.506588 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 15 00:28:19.506868 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:28:19.529012 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 15 00:28:19.530290 auditctl[1589]: No rules May 15 00:28:19.531111 systemd[1]: audit-rules.service: Deactivated successfully. May 15 00:28:19.531858 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 15 00:28:19.533471 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 15 00:28:19.556163 augenrules[1607]: No rules May 15 00:28:19.557309 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 15 00:28:19.558528 sudo[1585]: pam_unix(sudo:session): session closed for user root May 15 00:28:19.560443 sshd[1582]: pam_unix(sshd:session): session closed for user core May 15 00:28:19.572011 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:36462.service: Deactivated successfully. May 15 00:28:19.573313 systemd[1]: session-6.scope: Deactivated successfully. May 15 00:28:19.574447 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. May 15 00:28:19.575575 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:36466.service - OpenSSH per-connection server daemon (10.0.0.1:36466). May 15 00:28:19.576294 systemd-logind[1420]: Removed session 6. May 15 00:28:19.613938 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 36466 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:28:19.615227 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:28:19.619274 systemd-logind[1420]: New session 7 of user core. May 15 00:28:19.626913 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 00:28:19.676726 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 00:28:19.677356 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 00:28:19.983980 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 15 00:28:19.984116 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 00:28:20.231202 dockerd[1635]: time="2025-05-15T00:28:20.231135163Z" level=info msg="Starting up" May 15 00:28:20.370250 dockerd[1635]: time="2025-05-15T00:28:20.369881003Z" level=info msg="Loading containers: start." May 15 00:28:20.471544 kernel: Initializing XFRM netlink socket May 15 00:28:20.530351 systemd-networkd[1376]: docker0: Link UP May 15 00:28:20.548500 dockerd[1635]: time="2025-05-15T00:28:20.547945123Z" level=info msg="Loading containers: done." May 15 00:28:20.562932 dockerd[1635]: time="2025-05-15T00:28:20.562814763Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 00:28:20.562932 dockerd[1635]: time="2025-05-15T00:28:20.562914483Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 15 00:28:20.563082 dockerd[1635]: time="2025-05-15T00:28:20.563007283Z" level=info msg="Daemon has completed initialization" May 15 00:28:20.595215 dockerd[1635]: time="2025-05-15T00:28:20.595044683Z" level=info msg="API listen on /run/docker.sock" May 15 00:28:20.595299 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 00:28:21.236989 containerd[1440]: time="2025-05-15T00:28:21.236686443Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 15 00:28:21.836250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156543734.mount: Deactivated successfully. 
May 15 00:28:23.285826 containerd[1440]: time="2025-05-15T00:28:23.285174843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:23.286747 containerd[1440]: time="2025-05-15T00:28:23.286466123Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 15 00:28:23.287477 containerd[1440]: time="2025-05-15T00:28:23.287442723Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:23.292361 containerd[1440]: time="2025-05-15T00:28:23.292313403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:23.293660 containerd[1440]: time="2025-05-15T00:28:23.293619643Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 2.05689324s" May 15 00:28:23.293725 containerd[1440]: time="2025-05-15T00:28:23.293660523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 15 00:28:23.294640 containerd[1440]: time="2025-05-15T00:28:23.294604323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 15 00:28:24.528192 containerd[1440]: time="2025-05-15T00:28:24.528140523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:24.528589 containerd[1440]: time="2025-05-15T00:28:24.528562683Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 15 00:28:24.529790 containerd[1440]: time="2025-05-15T00:28:24.529436043Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:24.532948 containerd[1440]: time="2025-05-15T00:28:24.532910403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:24.534889 containerd[1440]: time="2025-05-15T00:28:24.534856163Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.24013616s" May 15 00:28:24.535069 containerd[1440]: time="2025-05-15T00:28:24.534979723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 15 00:28:24.535472 
containerd[1440]: time="2025-05-15T00:28:24.535430203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 15 00:28:25.747505 containerd[1440]: time="2025-05-15T00:28:25.747436963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:25.747925 containerd[1440]: time="2025-05-15T00:28:25.747883203Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 15 00:28:25.748687 containerd[1440]: time="2025-05-15T00:28:25.748655643Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:25.751387 containerd[1440]: time="2025-05-15T00:28:25.751334883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:25.753574 containerd[1440]: time="2025-05-15T00:28:25.753241923Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.21778628s" May 15 00:28:25.754285 containerd[1440]: time="2025-05-15T00:28:25.754143403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 15 00:28:25.754629 containerd[1440]: time="2025-05-15T00:28:25.754605963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 15 00:28:25.754726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 00:28:25.761931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:25.856556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:25.860144 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:28:25.911612 kubelet[1856]: E0515 00:28:25.911556 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:28:25.914336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:28:25.914487 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:28:26.852814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843880156.mount: Deactivated successfully. 
May 15 00:28:27.197095 containerd[1440]: time="2025-05-15T00:28:27.196981363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:27.197583 containerd[1440]: time="2025-05-15T00:28:27.197541843Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 15 00:28:27.198694 containerd[1440]: time="2025-05-15T00:28:27.198655283Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:27.200746 containerd[1440]: time="2025-05-15T00:28:27.200707563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:27.201610 containerd[1440]: time="2025-05-15T00:28:27.201568923Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.4469304s" May 15 00:28:27.201610 containerd[1440]: time="2025-05-15T00:28:27.201604083Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 15 00:28:27.202412 containerd[1440]: time="2025-05-15T00:28:27.202309843Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 00:28:27.773842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292583398.mount: Deactivated successfully. 
May 15 00:28:28.300316 containerd[1440]: time="2025-05-15T00:28:28.300257363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:28.301157 containerd[1440]: time="2025-05-15T00:28:28.301101763Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 15 00:28:28.301782 containerd[1440]: time="2025-05-15T00:28:28.301742123Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:28.305066 containerd[1440]: time="2025-05-15T00:28:28.305027803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:28.307643 containerd[1440]: time="2025-05-15T00:28:28.307603843Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.10525868s" May 15 00:28:28.307689 containerd[1440]: time="2025-05-15T00:28:28.307644123Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 00:28:28.308360 containerd[1440]: time="2025-05-15T00:28:28.308336123Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 00:28:28.801941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430950548.mount: Deactivated successfully. 
May 15 00:28:28.805842 containerd[1440]: time="2025-05-15T00:28:28.805797363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:28.806357 containerd[1440]: time="2025-05-15T00:28:28.806323323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 00:28:28.807096 containerd[1440]: time="2025-05-15T00:28:28.807065203Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:28.811038 containerd[1440]: time="2025-05-15T00:28:28.809321403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:28.811038 containerd[1440]: time="2025-05-15T00:28:28.810736523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 502.36904ms" May 15 00:28:28.811038 containerd[1440]: time="2025-05-15T00:28:28.810763563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 00:28:28.811261 containerd[1440]: time="2025-05-15T00:28:28.811231403Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 00:28:29.355891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349804058.mount: Deactivated successfully. May 15 00:28:31.758308 containerd[1440]: time="2025-05-15T00:28:31.758260363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:31.759325 containerd[1440]: time="2025-05-15T00:28:31.759292323Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 15 00:28:31.759994 containerd[1440]: time="2025-05-15T00:28:31.759959243Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:31.763369 containerd[1440]: time="2025-05-15T00:28:31.763307043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:31.764979 containerd[1440]: time="2025-05-15T00:28:31.764827603Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.9535616s" May 15 00:28:31.764979 containerd[1440]: time="2025-05-15T00:28:31.764864803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 15 00:28:36.164858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
May 15 00:28:36.172935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:36.265501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:36.268866 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 00:28:36.306699 kubelet[2006]: E0515 00:28:36.306636 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 00:28:36.310004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 00:28:36.310148 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 00:28:36.461393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:36.473987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:36.494002 systemd[1]: Reloading requested from client PID 2022 ('systemctl') (unit session-7.scope)... May 15 00:28:36.494019 systemd[1]: Reloading... May 15 00:28:36.562819 zram_generator::config[2062]: No configuration found. May 15 00:28:36.775600 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:28:36.828650 systemd[1]: Reloading finished in 334 ms. May 15 00:28:36.873581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:36.875818 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:36.877820 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:28:36.878838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:36.885063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:36.973395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:36.978389 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:28:37.011249 kubelet[2108]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:28:37.011249 kubelet[2108]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:28:37.011249 kubelet[2108]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:28:37.011578 kubelet[2108]: I0515 00:28:37.011419 2108 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:28:37.751884 kubelet[2108]: I0515 00:28:37.751841 2108 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:28:37.751884 kubelet[2108]: I0515 00:28:37.751873 2108 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:28:37.752126 kubelet[2108]: I0515 00:28:37.752101 2108 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:28:37.794115 kubelet[2108]: E0515 00:28:37.794068 2108 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:37.795052 kubelet[2108]: I0515 00:28:37.794907 2108 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:28:37.802235 kubelet[2108]: E0515 00:28:37.802195 2108 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:28:37.802235 kubelet[2108]: I0515 00:28:37.802226 2108 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:28:37.805662 kubelet[2108]: I0515 00:28:37.805639 2108 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:28:37.805941 kubelet[2108]: I0515 00:28:37.805915 2108 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:28:37.806063 kubelet[2108]: I0515 00:28:37.806030 2108 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:28:37.806215 kubelet[2108]: I0515 00:28:37.806052 2108 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:28:37.806352 kubelet[2108]: I0515 00:28:37.806340 2108 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:28:37.806352 kubelet[2108]: I0515 00:28:37.806352 2108 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:28:37.806546 kubelet[2108]: I0515 00:28:37.806533 2108 state_mem.go:36] "Initialized new in-memory state store" May 15 00:28:37.808570 kubelet[2108]: I0515 00:28:37.808137 2108 kubelet.go:408] "Attempting to sync node with API server" May 15 00:28:37.808570 kubelet[2108]: I0515 00:28:37.808164 2108 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:28:37.808570 kubelet[2108]: I0515 00:28:37.808255 2108 kubelet.go:314] "Adding apiserver pod source" May 15 00:28:37.808570 kubelet[2108]: I0515 00:28:37.808265 2108 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:28:37.811107 kubelet[2108]: W0515 00:28:37.811061 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:37.812042 kubelet[2108]: E0515 00:28:37.812009 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:37.812165 kubelet[2108]: W0515 00:28:37.811936 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:37.812254 kubelet[2108]: E0515 00:28:37.812238 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:37.812904 kubelet[2108]: I0515 00:28:37.812886 2108 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 15 00:28:37.814679 kubelet[2108]: I0515 00:28:37.814654 2108 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:28:37.815343 kubelet[2108]: W0515 00:28:37.815315 2108 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 00:28:37.816677 kubelet[2108]: I0515 00:28:37.815999 2108 server.go:1269] "Started kubelet" May 15 00:28:37.816677 kubelet[2108]: I0515 00:28:37.816630 2108 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:28:37.817146 kubelet[2108]: I0515 00:28:37.817101 2108 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:28:37.817911 kubelet[2108]: I0515 00:28:37.817432 2108 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:28:37.817981 kubelet[2108]: I0515 00:28:37.817953 2108 server.go:460] "Adding debug handlers to kubelet server" May 15 00:28:37.818503 kubelet[2108]: I0515 00:28:37.818479 2108 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:28:37.818822 kubelet[2108]: I0515 00:28:37.818706 2108 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:28:37.819433 kubelet[2108]: I0515 00:28:37.819214 2108 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:28:37.819433 kubelet[2108]: I0515 00:28:37.819321 2108 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:28:37.819433 kubelet[2108]: I0515 00:28:37.819361 2108 reconciler.go:26] "Reconciler: start to sync state" May 15 00:28:37.820489 kubelet[2108]: W0515 00:28:37.819717 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:37.820489 kubelet[2108]: E0515 00:28:37.819757 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" 
logger="UnhandledError" May 15 00:28:37.820489 kubelet[2108]: E0515 00:28:37.820101 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:28:37.820489 kubelet[2108]: E0515 00:28:37.818342 2108 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8bd2c5807ccb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:28:37.815966923 +0000 UTC m=+0.834630761,LastTimestamp:2025-05-15 00:28:37.815966923 +0000 UTC m=+0.834630761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 00:28:37.820489 kubelet[2108]: E0515 00:28:37.820466 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" May 15 00:28:37.821049 kubelet[2108]: I0515 00:28:37.821025 2108 factory.go:221] Registration of the systemd container factory successfully May 15 00:28:37.821111 kubelet[2108]: I0515 00:28:37.821100 2108 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:28:37.821428 kubelet[2108]: E0515 00:28:37.821409 2108 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:28:37.822941 kubelet[2108]: I0515 00:28:37.822912 2108 factory.go:221] Registration of the containerd container factory successfully May 15 00:28:37.837613 kubelet[2108]: I0515 00:28:37.837568 2108 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:28:37.839295 kubelet[2108]: I0515 00:28:37.839208 2108 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:28:37.839295 kubelet[2108]: I0515 00:28:37.839227 2108 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:28:37.839295 kubelet[2108]: I0515 00:28:37.839247 2108 state_mem.go:36] "Initialized new in-memory state store" May 15 00:28:37.839515 kubelet[2108]: I0515 00:28:37.839497 2108 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:28:37.839962 kubelet[2108]: I0515 00:28:37.839576 2108 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:28:37.839962 kubelet[2108]: I0515 00:28:37.839598 2108 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:28:37.839962 kubelet[2108]: E0515 00:28:37.839645 2108 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:28:37.840738 kubelet[2108]: W0515 00:28:37.840694 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:37.840904 kubelet[2108]: E0515 00:28:37.840860 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:37.902007 kubelet[2108]: I0515 00:28:37.901964 2108 policy_none.go:49] "None policy: Start" May 15 00:28:37.902838 kubelet[2108]: I0515 00:28:37.902813 2108 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:28:37.902914 kubelet[2108]: I0515 00:28:37.902845 2108 state_mem.go:35] "Initializing new in-memory state store" May 15 00:28:37.908267 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 00:28:37.920929 kubelet[2108]: E0515 00:28:37.920899 2108 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:28:37.924154 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 00:28:37.926556 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 00:28:37.936566 kubelet[2108]: I0515 00:28:37.936431 2108 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:28:37.936652 kubelet[2108]: I0515 00:28:37.936606 2108 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:28:37.936652 kubelet[2108]: I0515 00:28:37.936617 2108 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:28:37.936875 kubelet[2108]: I0515 00:28:37.936838 2108 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:28:37.938256 kubelet[2108]: E0515 00:28:37.938236 2108 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:28:37.946853 systemd[1]: Created slice kubepods-burstable-pod87535bec6648f22e83671a9105ff27ff.slice - libcontainer container kubepods-burstable-pod87535bec6648f22e83671a9105ff27ff.slice. May 15 00:28:37.957808 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 15 00:28:37.969984 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
May 15 00:28:38.021069 kubelet[2108]: E0515 00:28:38.020940 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" May 15 00:28:38.038553 kubelet[2108]: I0515 00:28:38.038520 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:28:38.038945 kubelet[2108]: E0515 00:28:38.038905 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" May 15 00:28:38.121262 kubelet[2108]: I0515 00:28:38.121207 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:28:38.121325 kubelet[2108]: I0515 00:28:38.121271 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:38.121325 kubelet[2108]: I0515 00:28:38.121306 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:38.121440 kubelet[2108]: I0515 00:28:38.121344 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87535bec6648f22e83671a9105ff27ff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87535bec6648f22e83671a9105ff27ff\") " pod="kube-system/kube-apiserver-localhost" May 15 00:28:38.121440 kubelet[2108]: I0515 00:28:38.121370 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:38.121483 kubelet[2108]: I0515 00:28:38.121440 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:38.121483 kubelet[2108]: I0515 00:28:38.121461 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" 
May 15 00:28:38.121483 kubelet[2108]: I0515 00:28:38.121476 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87535bec6648f22e83671a9105ff27ff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87535bec6648f22e83671a9105ff27ff\") " pod="kube-system/kube-apiserver-localhost" May 15 00:28:38.121543 kubelet[2108]: I0515 00:28:38.121509 2108 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87535bec6648f22e83671a9105ff27ff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87535bec6648f22e83671a9105ff27ff\") " pod="kube-system/kube-apiserver-localhost" May 15 00:28:38.240868 kubelet[2108]: I0515 00:28:38.240848 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:28:38.241119 kubelet[2108]: E0515 00:28:38.241093 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" May 15 00:28:38.257446 kubelet[2108]: E0515 00:28:38.257417 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:38.258084 containerd[1440]: time="2025-05-15T00:28:38.257914003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87535bec6648f22e83671a9105ff27ff,Namespace:kube-system,Attempt:0,}" May 15 00:28:38.269029 kubelet[2108]: E0515 00:28:38.268982 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:38.269463 containerd[1440]: time="2025-05-15T00:28:38.269435763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 15 00:28:38.271678 kubelet[2108]: E0515 00:28:38.271589 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:38.272071 containerd[1440]: time="2025-05-15T00:28:38.272035483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 15 00:28:38.422343 kubelet[2108]: E0515 00:28:38.422280 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" May 15 00:28:38.631865 kubelet[2108]: W0515 00:28:38.631660 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:38.631865 kubelet[2108]: E0515 00:28:38.631732 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:38.638377 kubelet[2108]: W0515 00:28:38.638307 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:38.638377 kubelet[2108]: E0515 00:28:38.638372 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:38.642560 kubelet[2108]: I0515 00:28:38.642522 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:28:38.642850 kubelet[2108]: E0515 00:28:38.642806 2108 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" May 15 00:28:38.763505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275622791.mount: Deactivated successfully. May 15 00:28:38.777794 kubelet[2108]: W0515 00:28:38.777680 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:38.777794 kubelet[2108]: E0515 00:28:38.777728 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:38.777962 containerd[1440]: time="2025-05-15T00:28:38.777916443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:28:38.779759 containerd[1440]: time="2025-05-15T00:28:38.779720563Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:28:38.780324 containerd[1440]: time="2025-05-15T00:28:38.780263803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 00:28:38.781144 containerd[1440]: time="2025-05-15T00:28:38.781105883Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:28:38.781492 containerd[1440]: time="2025-05-15T00:28:38.781461643Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:28:38.782526 containerd[1440]: time="2025-05-15T00:28:38.782489443Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:28:38.782889 containerd[1440]: time="2025-05-15T00:28:38.782857843Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 00:28:38.784476 containerd[1440]: time="2025-05-15T00:28:38.784437883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 00:28:38.787349 containerd[1440]: time="2025-05-15T00:28:38.787316443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.32628ms" May 15 00:28:38.788392 kubelet[2108]: W0515 00:28:38.788285 2108 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused May 15 00:28:38.788530 kubelet[2108]: E0515 00:28:38.788483 2108 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" May 15 00:28:38.788777 containerd[1440]: time="2025-05-15T00:28:38.788745323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 516.54048ms" May 15 00:28:38.791111 containerd[1440]: time="2025-05-15T00:28:38.791080603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.58124ms" May 15 00:28:38.938353 containerd[1440]: time="2025-05-15T00:28:38.937830523Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:38.938353 containerd[1440]: time="2025-05-15T00:28:38.937878003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:38.938353 containerd[1440]: time="2025-05-15T00:28:38.937893763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:38.938353 containerd[1440]: time="2025-05-15T00:28:38.938026083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:38.940669 containerd[1440]: time="2025-05-15T00:28:38.939588203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:38.940669 containerd[1440]: time="2025-05-15T00:28:38.939720843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:38.940669 containerd[1440]: time="2025-05-15T00:28:38.939748283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:38.940669 containerd[1440]: time="2025-05-15T00:28:38.939940923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:38.941037 containerd[1440]: time="2025-05-15T00:28:38.940943963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:38.941037 containerd[1440]: time="2025-05-15T00:28:38.940992643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:38.941037 containerd[1440]: time="2025-05-15T00:28:38.941016283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:38.941187 containerd[1440]: time="2025-05-15T00:28:38.941108323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:38.966939 systemd[1]: Started cri-containerd-41536254030c9ebaa2b56a50921ee0591da76f3f14f5ec6e32a385b4cb369771.scope - libcontainer container 41536254030c9ebaa2b56a50921ee0591da76f3f14f5ec6e32a385b4cb369771. May 15 00:28:38.967988 systemd[1]: Started cri-containerd-461e1e8c8fb4b722488a9f424957c9034a7291fd087fe6ecf0e86146f802e815.scope - libcontainer container 461e1e8c8fb4b722488a9f424957c9034a7291fd087fe6ecf0e86146f802e815. May 15 00:28:38.969751 systemd[1]: Started cri-containerd-8cc158b75c4fed11ff9e021471c31dd1cbf3b12e62d7fa1de87654c1e2a9daaf.scope - libcontainer container 8cc158b75c4fed11ff9e021471c31dd1cbf3b12e62d7fa1de87654c1e2a9daaf. 
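Each "Started cri-containerd-….scope" unit above is a pod sandbox the kubelet asked containerd to create over the CRI. A rough sketch of that RuntimeService call using the k8s.io/cri-api v1 client (illustrative only; the containerd socket path is the conventional default and the sandbox config is pared down to the metadata visible in the log):

```go
// Sketch only: the RunPodSandbox call that precedes the "returns sandbox id" lines.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint (not read from this log).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Matches "RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,...}".
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-localhost",
				Uid:       "0613557c150e4f35d1f3f822b5f32ff1",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
	// CreateContainer and StartContainer follow with this sandbox id, as in the
	// "CreateContainer within sandbox ..." and "StartContainer for ..." lines below.
}
```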
May 15 00:28:39.001626 containerd[1440]: time="2025-05-15T00:28:39.001002163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"461e1e8c8fb4b722488a9f424957c9034a7291fd087fe6ecf0e86146f802e815\"" May 15 00:28:39.003294 kubelet[2108]: E0515 00:28:39.003230 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:39.005400 containerd[1440]: time="2025-05-15T00:28:39.005328643Z" level=info msg="CreateContainer within sandbox \"461e1e8c8fb4b722488a9f424957c9034a7291fd087fe6ecf0e86146f802e815\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:28:39.010154 containerd[1440]: time="2025-05-15T00:28:39.010123523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87535bec6648f22e83671a9105ff27ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"41536254030c9ebaa2b56a50921ee0591da76f3f14f5ec6e32a385b4cb369771\"" May 15 00:28:39.014106 containerd[1440]: time="2025-05-15T00:28:39.013906043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cc158b75c4fed11ff9e021471c31dd1cbf3b12e62d7fa1de87654c1e2a9daaf\"" May 15 00:28:39.014215 kubelet[2108]: E0515 00:28:39.013764 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:39.015814 kubelet[2108]: E0515 00:28:39.015691 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:39.019114 containerd[1440]: time="2025-05-15T00:28:39.019076563Z" level=info msg="CreateContainer within sandbox \"8cc158b75c4fed11ff9e021471c31dd1cbf3b12e62d7fa1de87654c1e2a9daaf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:28:39.019187 containerd[1440]: time="2025-05-15T00:28:39.019080123Z" level=info msg="CreateContainer within sandbox \"41536254030c9ebaa2b56a50921ee0591da76f3f14f5ec6e32a385b4cb369771\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:28:39.022809 containerd[1440]: time="2025-05-15T00:28:39.022751523Z" level=info msg="CreateContainer within sandbox \"461e1e8c8fb4b722488a9f424957c9034a7291fd087fe6ecf0e86146f802e815\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"55d2d17479b963dfd5853796bd58120281c59c0047ecfe55b6418fb405b5d95e\"" May 15 00:28:39.024429 containerd[1440]: time="2025-05-15T00:28:39.024350523Z" level=info msg="StartContainer for \"55d2d17479b963dfd5853796bd58120281c59c0047ecfe55b6418fb405b5d95e\"" May 15 00:28:39.039226 containerd[1440]: time="2025-05-15T00:28:39.039179363Z" level=info msg="CreateContainer within sandbox \"41536254030c9ebaa2b56a50921ee0591da76f3f14f5ec6e32a385b4cb369771\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87b91486a7c6ae19d4dd7127125835df132703e0aae3eaf678895830bbfb3c6a\"" May 15 00:28:39.039605 containerd[1440]: time="2025-05-15T00:28:39.039567123Z" level=info msg="CreateContainer within sandbox \"8cc158b75c4fed11ff9e021471c31dd1cbf3b12e62d7fa1de87654c1e2a9daaf\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69854e433369345e54fa0ffd97add5ab32dbad3066610c6d165b1c7375afe25c\"" May 15 00:28:39.039829 containerd[1440]: time="2025-05-15T00:28:39.039804643Z" level=info msg="StartContainer for \"87b91486a7c6ae19d4dd7127125835df132703e0aae3eaf678895830bbfb3c6a\"" May 15 00:28:39.040010 containerd[1440]: time="2025-05-15T00:28:39.039961883Z" level=info msg="StartContainer for \"69854e433369345e54fa0ffd97add5ab32dbad3066610c6d165b1c7375afe25c\"" May 15 00:28:39.057945 systemd[1]: Started cri-containerd-55d2d17479b963dfd5853796bd58120281c59c0047ecfe55b6418fb405b5d95e.scope - libcontainer container 55d2d17479b963dfd5853796bd58120281c59c0047ecfe55b6418fb405b5d95e. May 15 00:28:39.080934 systemd[1]: Started cri-containerd-69854e433369345e54fa0ffd97add5ab32dbad3066610c6d165b1c7375afe25c.scope - libcontainer container 69854e433369345e54fa0ffd97add5ab32dbad3066610c6d165b1c7375afe25c. May 15 00:28:39.082193 systemd[1]: Started cri-containerd-87b91486a7c6ae19d4dd7127125835df132703e0aae3eaf678895830bbfb3c6a.scope - libcontainer container 87b91486a7c6ae19d4dd7127125835df132703e0aae3eaf678895830bbfb3c6a. May 15 00:28:39.127656 containerd[1440]: time="2025-05-15T00:28:39.127586363Z" level=info msg="StartContainer for \"55d2d17479b963dfd5853796bd58120281c59c0047ecfe55b6418fb405b5d95e\" returns successfully" May 15 00:28:39.158137 containerd[1440]: time="2025-05-15T00:28:39.158077483Z" level=info msg="StartContainer for \"87b91486a7c6ae19d4dd7127125835df132703e0aae3eaf678895830bbfb3c6a\" returns successfully" May 15 00:28:39.158267 containerd[1440]: time="2025-05-15T00:28:39.158159283Z" level=info msg="StartContainer for \"69854e433369345e54fa0ffd97add5ab32dbad3066610c6d165b1c7375afe25c\" returns successfully" May 15 00:28:39.223177 kubelet[2108]: E0515 00:28:39.223024 2108 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s" May 15 00:28:39.444609 kubelet[2108]: I0515 00:28:39.444566 2108 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:28:39.849806 kubelet[2108]: E0515 00:28:39.849627 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:39.850523 kubelet[2108]: E0515 00:28:39.850463 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:39.853976 kubelet[2108]: E0515 00:28:39.853919 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:40.857337 kubelet[2108]: E0515 00:28:40.856590 2108 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:41.383895 kubelet[2108]: E0515 00:28:41.383855 2108 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 00:28:41.474335 kubelet[2108]: I0515 00:28:41.473694 2108 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 
00:28:41.813717 kubelet[2108]: I0515 00:28:41.813594 2108 apiserver.go:52] "Watching apiserver" May 15 00:28:41.820104 kubelet[2108]: I0515 00:28:41.820075 2108 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:28:43.451060 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)... May 15 00:28:43.451391 systemd[1]: Reloading... May 15 00:28:43.515804 zram_generator::config[2427]: No configuration found. May 15 00:28:43.598747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:28:43.666186 systemd[1]: Reloading finished in 214 ms. May 15 00:28:43.706294 kubelet[2108]: I0515 00:28:43.706024 2108 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:28:43.706179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:43.720731 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:28:43.720956 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:43.721010 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 114.6M memory peak, 0B memory swap peak. May 15 00:28:43.737091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 00:28:43.824297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 00:28:43.828160 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 00:28:43.861899 kubelet[2466]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:28:43.861899 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:28:43.861899 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:28:43.861899 kubelet[2466]: I0515 00:28:43.861680 2466 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:28:43.872401 kubelet[2466]: I0515 00:28:43.872361 2466 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 15 00:28:43.872401 kubelet[2466]: I0515 00:28:43.872393 2466 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:28:43.872610 kubelet[2466]: I0515 00:28:43.872594 2466 server.go:929] "Client rotation is on, will bootstrap in background" May 15 00:28:43.873879 kubelet[2466]: I0515 00:28:43.873853 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
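"Client rotation is on" together with the cert/key load above refers to a single PEM file that holds both the kubelet's client certificate and its private key; the kubelet requests a replacement before the current pair expires. A small sketch for inspecting that file (the path is the one shown in the log; everything else is a standard-library assumption):

```go
// Sketch only: inspect the combined cert/key PEM that kubelet client rotation manages.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

func main() {
	const path = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	// The same file is passed as both cert and key, since it contains both PEM blocks.
	pair, err := tls.LoadX509KeyPair(path, path)
	if err != nil {
		panic(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		panic(err)
	}
	// Rotation is expected to replace the file well before this date.
	fmt.Println("kubelet client cert expires:", leaf.NotAfter)
}
```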
May 15 00:28:43.875864 kubelet[2466]: I0515 00:28:43.875722 2466 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:28:43.878230 kubelet[2466]: E0515 00:28:43.878197 2466 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 00:28:43.878335 kubelet[2466]: I0515 00:28:43.878320 2466 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 00:28:43.880667 kubelet[2466]: I0515 00:28:43.880628 2466 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 00:28:43.880848 kubelet[2466]: I0515 00:28:43.880835 2466 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 00:28:43.881042 kubelet[2466]: I0515 00:28:43.881017 2466 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:28:43.881471 kubelet[2466]: I0515 00:28:43.881097 2466 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 00:28:43.881471 kubelet[2466]: I0515 00:28:43.881256 2466 topology_manager.go:138] "Creating topology manager with none policy" May 15 00:28:43.881471 kubelet[2466]: I0515 00:28:43.881264 2466 container_manager_linux.go:300] "Creating device plugin manager" May 15 00:28:43.881471 kubelet[2466]: I0515 00:28:43.881291 2466 state_mem.go:36] "Initialized new in-memory state store" May 15 00:28:43.881471 kubelet[2466]: I0515 00:28:43.881401 2466 kubelet.go:408] "Attempting to sync node with API server" May 15 00:28:43.881633 kubelet[2466]: I0515 00:28:43.881414 2466 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:28:43.881633 kubelet[2466]: I0515 00:28:43.881432 
2466 kubelet.go:314] "Adding apiserver pod source" May 15 00:28:43.881633 kubelet[2466]: I0515 00:28:43.881440 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:28:43.883304 kubelet[2466]: I0515 00:28:43.883271 2466 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 15 00:28:43.883826 kubelet[2466]: I0515 00:28:43.883800 2466 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:28:43.884228 kubelet[2466]: I0515 00:28:43.884200 2466 server.go:1269] "Started kubelet" May 15 00:28:43.885116 kubelet[2466]: I0515 00:28:43.885085 2466 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:28:43.887879 kubelet[2466]: I0515 00:28:43.887856 2466 server.go:460] "Adding debug handlers to kubelet server" May 15 00:28:43.890431 kubelet[2466]: I0515 00:28:43.890281 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:28:43.890938 kubelet[2466]: I0515 00:28:43.890915 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 00:28:43.891050 kubelet[2466]: I0515 00:28:43.890831 2466 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:28:43.891214 kubelet[2466]: I0515 00:28:43.891196 2466 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:28:43.894641 kubelet[2466]: I0515 00:28:43.894617 2466 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 00:28:43.894897 kubelet[2466]: E0515 00:28:43.894874 2466 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:28:43.895843 kubelet[2466]: I0515 00:28:43.895106 2466 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 15 00:28:43.895843 kubelet[2466]: I0515 00:28:43.895228 2466 reconciler.go:26] "Reconciler: start to sync state" May 15 00:28:43.895843 kubelet[2466]: I0515 00:28:43.895337 2466 factory.go:221] Registration of the systemd container factory successfully May 15 00:28:43.895843 kubelet[2466]: I0515 00:28:43.895424 2466 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:28:43.903564 kubelet[2466]: I0515 00:28:43.903536 2466 factory.go:221] Registration of the containerd container factory successfully May 15 00:28:43.905916 kubelet[2466]: I0515 00:28:43.905883 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 00:28:43.908579 kubelet[2466]: I0515 00:28:43.908552 2466 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 00:28:43.908579 kubelet[2466]: I0515 00:28:43.908577 2466 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:28:43.908795 kubelet[2466]: I0515 00:28:43.908593 2466 kubelet.go:2321] "Starting kubelet main sync loop" May 15 00:28:43.908795 kubelet[2466]: E0515 00:28:43.908633 2466 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:28:43.939707 kubelet[2466]: I0515 00:28:43.939680 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:28:43.939707 kubelet[2466]: I0515 00:28:43.939699 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:28:43.939707 kubelet[2466]: I0515 00:28:43.939719 2466 state_mem.go:36] "Initialized new in-memory state store" May 15 00:28:43.939876 kubelet[2466]: I0515 00:28:43.939863 2466 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:28:43.939900 kubelet[2466]: I0515 00:28:43.939873 2466 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:28:43.939900 kubelet[2466]: I0515 00:28:43.939890 2466 policy_none.go:49] "None policy: Start" May 15 00:28:43.940390 kubelet[2466]: I0515 00:28:43.940367 2466 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:28:43.940655 kubelet[2466]: I0515 00:28:43.940500 2466 state_mem.go:35] "Initializing new in-memory state store" May 15 00:28:43.940797 kubelet[2466]: I0515 00:28:43.940763 2466 state_mem.go:75] "Updated machine memory state" May 15 00:28:43.944682 kubelet[2466]: I0515 00:28:43.944659 2466 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:28:43.944838 kubelet[2466]: I0515 00:28:43.944820 2466 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 00:28:43.944872 kubelet[2466]: I0515 00:28:43.944836 2466 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:28:43.945090 kubelet[2466]: I0515 00:28:43.945065 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:28:44.050344 kubelet[2466]: I0515 00:28:44.050251 2466 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 00:28:44.056819 kubelet[2466]: I0515 00:28:44.056787 2466 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 00:28:44.056990 kubelet[2466]: I0515 00:28:44.056868 2466 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 00:28:44.096553 kubelet[2466]: I0515 00:28:44.096502 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87535bec6648f22e83671a9105ff27ff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87535bec6648f22e83671a9105ff27ff\") " pod="kube-system/kube-apiserver-localhost" May 15 00:28:44.096553 kubelet[2466]: I0515 00:28:44.096547 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:44.096710 kubelet[2466]: I0515 00:28:44.096577 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:44.096710 kubelet[2466]: I0515 00:28:44.096594 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87535bec6648f22e83671a9105ff27ff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87535bec6648f22e83671a9105ff27ff\") " pod="kube-system/kube-apiserver-localhost" May 15 00:28:44.096710 kubelet[2466]: I0515 00:28:44.096612 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87535bec6648f22e83671a9105ff27ff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87535bec6648f22e83671a9105ff27ff\") " pod="kube-system/kube-apiserver-localhost" May 15 00:28:44.096710 kubelet[2466]: I0515 00:28:44.096629 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:44.096710 kubelet[2466]: I0515 00:28:44.096642 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:44.096850 kubelet[2466]: I0515 00:28:44.096657 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:28:44.096850 kubelet[2466]: I0515 00:28:44.096673 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 15 00:28:44.317971 kubelet[2466]: E0515 00:28:44.317872 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:44.317971 kubelet[2466]: E0515 00:28:44.317932 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:44.317971 kubelet[2466]: E0515 00:28:44.317956 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:44.883043 kubelet[2466]: I0515 00:28:44.883010 2466 apiserver.go:52] "Watching apiserver" May 15 00:28:44.895641 kubelet[2466]: I0515 00:28:44.895587 2466 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 15 00:28:44.926804 kubelet[2466]: E0515 00:28:44.926286 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:44.927042 kubelet[2466]: E0515 00:28:44.927022 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:44.932519 kubelet[2466]: E0515 00:28:44.932476 2466 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 00:28:44.933520 kubelet[2466]: E0515 00:28:44.932757 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:44.943866 kubelet[2466]: I0515 00:28:44.943801 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.943785003 podStartE2EDuration="943.785003ms" podCreationTimestamp="2025-05-15 00:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:28:44.943723763 +0000 UTC m=+1.112532161" watchObservedRunningTime="2025-05-15 00:28:44.943785003 +0000 UTC m=+1.112593401" May 15 00:28:44.957702 kubelet[2466]: I0515 00:28:44.957454 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.957434563 podStartE2EDuration="957.434563ms" podCreationTimestamp="2025-05-15 00:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:28:44.950860923 +0000 UTC m=+1.119669321" watchObservedRunningTime="2025-05-15 00:28:44.957434563 +0000 UTC m=+1.126242961" May 15 00:28:44.957702 kubelet[2466]: I0515 00:28:44.957622 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.957616843 podStartE2EDuration="957.616843ms" podCreationTimestamp="2025-05-15 00:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:28:44.957287483 +0000 UTC m=+1.126095881" watchObservedRunningTime="2025-05-15 00:28:44.957616843 +0000 UTC m=+1.126425241" May 15 00:28:45.930239 kubelet[2466]: E0515 00:28:45.930195 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:45.930563 kubelet[2466]: E0515 00:28:45.930309 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:48.087529 kubelet[2466]: E0515 00:28:48.087459 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:48.258398 kubelet[2466]: I0515 00:28:48.258346 2466 kuberuntime_manager.go:1633] "Updating runtime config 
through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:28:48.259425 containerd[1440]: time="2025-05-15T00:28:48.258693351Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:28:48.259721 kubelet[2466]: I0515 00:28:48.258905 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:28:48.705055 sudo[1618]: pam_unix(sudo:session): session closed for user root May 15 00:28:48.707107 sshd[1615]: pam_unix(sshd:session): session closed for user core May 15 00:28:48.710523 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:36466.service: Deactivated successfully. May 15 00:28:48.712220 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:28:48.712385 systemd[1]: session-7.scope: Consumed 6.652s CPU time, 151.2M memory peak, 0B memory swap peak. May 15 00:28:48.712930 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. May 15 00:28:48.716561 systemd-logind[1420]: Removed session 7. May 15 00:28:49.293192 systemd[1]: Created slice kubepods-besteffort-pod79efdbfa_5f3f_4a1b_8ca2_704d1360c1d5.slice - libcontainer container kubepods-besteffort-pod79efdbfa_5f3f_4a1b_8ca2_704d1360c1d5.slice. May 15 00:28:49.357408 systemd[1]: Created slice kubepods-besteffort-pod636e02d5_ff36_41ab_99d9_165a4b66a2bf.slice - libcontainer container kubepods-besteffort-pod636e02d5_ff36_41ab_99d9_165a4b66a2bf.slice. May 15 00:28:49.433066 kubelet[2466]: I0515 00:28:49.433028 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5-kube-proxy\") pod \"kube-proxy-dkxzp\" (UID: \"79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5\") " pod="kube-system/kube-proxy-dkxzp" May 15 00:28:49.433464 kubelet[2466]: I0515 00:28:49.433435 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5-xtables-lock\") pod \"kube-proxy-dkxzp\" (UID: \"79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5\") " pod="kube-system/kube-proxy-dkxzp" May 15 00:28:49.433598 kubelet[2466]: I0515 00:28:49.433489 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5-lib-modules\") pod \"kube-proxy-dkxzp\" (UID: \"79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5\") " pod="kube-system/kube-proxy-dkxzp" May 15 00:28:49.433598 kubelet[2466]: I0515 00:28:49.433528 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-262p2\" (UniqueName: \"kubernetes.io/projected/79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5-kube-api-access-262p2\") pod \"kube-proxy-dkxzp\" (UID: \"79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5\") " pod="kube-system/kube-proxy-dkxzp" May 15 00:28:49.498424 kubelet[2466]: E0515 00:28:49.498346 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:49.534474 kubelet[2466]: I0515 00:28:49.534416 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/636e02d5-ff36-41ab-99d9-165a4b66a2bf-var-lib-calico\") pod \"tigera-operator-6f6897fdc5-c5vch\" (UID: 
\"636e02d5-ff36-41ab-99d9-165a4b66a2bf\") " pod="tigera-operator/tigera-operator-6f6897fdc5-c5vch" May 15 00:28:49.534474 kubelet[2466]: I0515 00:28:49.534485 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r82q\" (UniqueName: \"kubernetes.io/projected/636e02d5-ff36-41ab-99d9-165a4b66a2bf-kube-api-access-7r82q\") pod \"tigera-operator-6f6897fdc5-c5vch\" (UID: \"636e02d5-ff36-41ab-99d9-165a4b66a2bf\") " pod="tigera-operator/tigera-operator-6f6897fdc5-c5vch" May 15 00:28:49.607978 kubelet[2466]: E0515 00:28:49.607869 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:49.608503 containerd[1440]: time="2025-05-15T00:28:49.608468210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkxzp,Uid:79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5,Namespace:kube-system,Attempt:0,}" May 15 00:28:49.626802 containerd[1440]: time="2025-05-15T00:28:49.626674789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:49.626802 containerd[1440]: time="2025-05-15T00:28:49.626737669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:49.626802 containerd[1440]: time="2025-05-15T00:28:49.626749549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:49.626936 containerd[1440]: time="2025-05-15T00:28:49.626843510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:49.648951 systemd[1]: Started cri-containerd-af4648e4b11a0160f2f43e4ceedbaf9ca1e4b1661c2ffe3e3ec5df7f804bbe8b.scope - libcontainer container af4648e4b11a0160f2f43e4ceedbaf9ca1e4b1661c2ffe3e3ec5df7f804bbe8b. 
May 15 00:28:49.661894 containerd[1440]: time="2025-05-15T00:28:49.661842741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-c5vch,Uid:636e02d5-ff36-41ab-99d9-165a4b66a2bf,Namespace:tigera-operator,Attempt:0,}" May 15 00:28:49.667624 containerd[1440]: time="2025-05-15T00:28:49.667555452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dkxzp,Uid:79efdbfa-5f3f-4a1b-8ca2-704d1360c1d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"af4648e4b11a0160f2f43e4ceedbaf9ca1e4b1661c2ffe3e3ec5df7f804bbe8b\"" May 15 00:28:49.668423 kubelet[2466]: E0515 00:28:49.668401 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:49.670338 containerd[1440]: time="2025-05-15T00:28:49.670303867Z" level=info msg="CreateContainer within sandbox \"af4648e4b11a0160f2f43e4ceedbaf9ca1e4b1661c2ffe3e3ec5df7f804bbe8b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:28:49.683976 containerd[1440]: time="2025-05-15T00:28:49.683932981Z" level=info msg="CreateContainer within sandbox \"af4648e4b11a0160f2f43e4ceedbaf9ca1e4b1661c2ffe3e3ec5df7f804bbe8b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf13c2340633b680b5890d248fa321241121bca8b836ba63fee6498ee9e4709a\"" May 15 00:28:49.684730 containerd[1440]: time="2025-05-15T00:28:49.684702305Z" level=info msg="StartContainer for \"cf13c2340633b680b5890d248fa321241121bca8b836ba63fee6498ee9e4709a\"" May 15 00:28:49.685864 containerd[1440]: time="2025-05-15T00:28:49.685483349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:49.685864 containerd[1440]: time="2025-05-15T00:28:49.685551030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:49.685864 containerd[1440]: time="2025-05-15T00:28:49.685562630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:49.685864 containerd[1440]: time="2025-05-15T00:28:49.685648510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:49.704940 systemd[1]: Started cri-containerd-704dfaff016ad98b61469b398ecc5ba0eadc4a44ad259ed5df211f44e36ebacf.scope - libcontainer container 704dfaff016ad98b61469b398ecc5ba0eadc4a44ad259ed5df211f44e36ebacf. May 15 00:28:49.707762 systemd[1]: Started cri-containerd-cf13c2340633b680b5890d248fa321241121bca8b836ba63fee6498ee9e4709a.scope - libcontainer container cf13c2340633b680b5890d248fa321241121bca8b836ba63fee6498ee9e4709a. 
May 15 00:28:49.742411 containerd[1440]: time="2025-05-15T00:28:49.742347499Z" level=info msg="StartContainer for \"cf13c2340633b680b5890d248fa321241121bca8b836ba63fee6498ee9e4709a\" returns successfully" May 15 00:28:49.742508 containerd[1440]: time="2025-05-15T00:28:49.742447739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6f6897fdc5-c5vch,Uid:636e02d5-ff36-41ab-99d9-165a4b66a2bf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"704dfaff016ad98b61469b398ecc5ba0eadc4a44ad259ed5df211f44e36ebacf\"" May 15 00:28:49.751345 containerd[1440]: time="2025-05-15T00:28:49.750941466Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 00:28:49.943240 kubelet[2466]: E0515 00:28:49.943096 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:49.947645 kubelet[2466]: E0515 00:28:49.947479 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:49.956932 kubelet[2466]: I0515 00:28:49.955925 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dkxzp" podStartSLOduration=0.955910062 podStartE2EDuration="955.910062ms" podCreationTimestamp="2025-05-15 00:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:28:49.955068217 +0000 UTC m=+6.123876575" watchObservedRunningTime="2025-05-15 00:28:49.955910062 +0000 UTC m=+6.124718460" May 15 00:28:51.171839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525024142.mount: Deactivated successfully. 
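The podStartSLOduration in the kube-proxy entry above is simply the gap between the pod's creation timestamp and the time the kubelet first observed it running: 00:28:49.955910062 minus 00:28:49 is 955.910062ms, which matches the logged podStartE2EDuration. A tiny sketch of that arithmetic (timestamps copied from the log entry; the layout string is an assumption about the printed format):

```go
// Sketch only: reproduce the kube-proxy podStartE2EDuration from the two
// timestamps in the log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-05-15 00:28:49 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observedRunning, err := time.Parse(layout, "2025-05-15 00:28:49.955910062 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// Prints 955.910062ms, the podStartE2EDuration the kubelet reports.
	fmt.Println(observedRunning.Sub(created))
}
```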
May 15 00:28:51.552452 containerd[1440]: time="2025-05-15T00:28:51.552340330Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:51.553530 containerd[1440]: time="2025-05-15T00:28:51.553484215Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.7: active requests=0, bytes read=19323084" May 15 00:28:51.554341 containerd[1440]: time="2025-05-15T00:28:51.554310339Z" level=info msg="ImageCreate event name:\"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:51.556874 containerd[1440]: time="2025-05-15T00:28:51.556636870Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:51.557451 containerd[1440]: time="2025-05-15T00:28:51.557354914Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.7\" with image id \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\", repo tag \"quay.io/tigera/operator:v1.36.7\", repo digest \"quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9\", size \"19319079\" in 1.806374568s" May 15 00:28:51.557451 containerd[1440]: time="2025-05-15T00:28:51.557411794Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:27f7c2cfac802523e44ecd16453a4cc992f6c7d610c13054f2715a7cb4370565\"" May 15 00:28:51.561951 containerd[1440]: time="2025-05-15T00:28:51.561906135Z" level=info msg="CreateContainer within sandbox \"704dfaff016ad98b61469b398ecc5ba0eadc4a44ad259ed5df211f44e36ebacf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 00:28:51.571330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047210424.mount: Deactivated successfully. May 15 00:28:51.576870 containerd[1440]: time="2025-05-15T00:28:51.576832167Z" level=info msg="CreateContainer within sandbox \"704dfaff016ad98b61469b398ecc5ba0eadc4a44ad259ed5df211f44e36ebacf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11226f72e0639b480b6fb043f240fcd1cf6668c21e4d4347e37cc737af985e56\"" May 15 00:28:51.578143 containerd[1440]: time="2025-05-15T00:28:51.578108093Z" level=info msg="StartContainer for \"11226f72e0639b480b6fb043f240fcd1cf6668c21e4d4347e37cc737af985e56\"" May 15 00:28:51.605944 systemd[1]: Started cri-containerd-11226f72e0639b480b6fb043f240fcd1cf6668c21e4d4347e37cc737af985e56.scope - libcontainer container 11226f72e0639b480b6fb043f240fcd1cf6668c21e4d4347e37cc737af985e56. 
May 15 00:28:51.626112 containerd[1440]: time="2025-05-15T00:28:51.626071162Z" level=info msg="StartContainer for \"11226f72e0639b480b6fb043f240fcd1cf6668c21e4d4347e37cc737af985e56\" returns successfully" May 15 00:28:51.968699 kubelet[2466]: I0515 00:28:51.968516 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6f6897fdc5-c5vch" podStartSLOduration=1.156706486 podStartE2EDuration="2.968502361s" podCreationTimestamp="2025-05-15 00:28:49 +0000 UTC" firstStartedPulling="2025-05-15 00:28:49.748920975 +0000 UTC m=+5.917729333" lastFinishedPulling="2025-05-15 00:28:51.56071681 +0000 UTC m=+7.729525208" observedRunningTime="2025-05-15 00:28:51.968472081 +0000 UTC m=+8.137280479" watchObservedRunningTime="2025-05-15 00:28:51.968502361 +0000 UTC m=+8.137310839" May 15 00:28:54.885461 kubelet[2466]: E0515 00:28:54.885426 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:55.320441 systemd[1]: Created slice kubepods-besteffort-pod4c1532cf_b326_4e29_90e4_a5b825f1b6c9.slice - libcontainer container kubepods-besteffort-pod4c1532cf_b326_4e29_90e4_a5b825f1b6c9.slice. May 15 00:28:55.350843 systemd[1]: Created slice kubepods-besteffort-podd855040b_9f87_4e62_9b2c_54d30ca287b6.slice - libcontainer container kubepods-besteffort-podd855040b_9f87_4e62_9b2c_54d30ca287b6.slice. May 15 00:28:55.472025 kubelet[2466]: E0515 00:28:55.471197 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:28:55.472413 kubelet[2466]: I0515 00:28:55.472229 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c1532cf-b326-4e29-90e4-a5b825f1b6c9-tigera-ca-bundle\") pod \"calico-typha-85f596cd8d-w9cp8\" (UID: \"4c1532cf-b326-4e29-90e4-a5b825f1b6c9\") " pod="calico-system/calico-typha-85f596cd8d-w9cp8" May 15 00:28:55.472413 kubelet[2466]: I0515 00:28:55.472314 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-policysync\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.472413 kubelet[2466]: I0515 00:28:55.472360 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-var-lib-calico\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.472413 kubelet[2466]: I0515 00:28:55.472387 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4c1532cf-b326-4e29-90e4-a5b825f1b6c9-typha-certs\") pod \"calico-typha-85f596cd8d-w9cp8\" (UID: \"4c1532cf-b326-4e29-90e4-a5b825f1b6c9\") " pod="calico-system/calico-typha-85f596cd8d-w9cp8" May 15 00:28:55.472702 kubelet[2466]: I0515 00:28:55.472569 2466 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-cni-bin-dir\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.472702 kubelet[2466]: I0515 00:28:55.472593 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-flexvol-driver-host\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.472702 kubelet[2466]: I0515 00:28:55.472651 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpqc9\" (UniqueName: \"kubernetes.io/projected/d855040b-9f87-4e62-9b2c-54d30ca287b6-kube-api-access-bpqc9\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.472702 kubelet[2466]: I0515 00:28:55.472669 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cqzq\" (UniqueName: \"kubernetes.io/projected/4c1532cf-b326-4e29-90e4-a5b825f1b6c9-kube-api-access-9cqzq\") pod \"calico-typha-85f596cd8d-w9cp8\" (UID: \"4c1532cf-b326-4e29-90e4-a5b825f1b6c9\") " pod="calico-system/calico-typha-85f596cd8d-w9cp8" May 15 00:28:55.472702 kubelet[2466]: I0515 00:28:55.472683 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-xtables-lock\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.473818 kubelet[2466]: I0515 00:28:55.473528 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-cni-net-dir\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.473818 kubelet[2466]: I0515 00:28:55.473576 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-var-run-calico\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.473818 kubelet[2466]: I0515 00:28:55.473593 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d855040b-9f87-4e62-9b2c-54d30ca287b6-node-certs\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.473818 kubelet[2466]: I0515 00:28:55.473609 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-cni-log-dir\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.473818 kubelet[2466]: I0515 00:28:55.473626 2466 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d855040b-9f87-4e62-9b2c-54d30ca287b6-lib-modules\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.473972 kubelet[2466]: I0515 00:28:55.473678 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d855040b-9f87-4e62-9b2c-54d30ca287b6-tigera-ca-bundle\") pod \"calico-node-p48lx\" (UID: \"d855040b-9f87-4e62-9b2c-54d30ca287b6\") " pod="calico-system/calico-node-p48lx" May 15 00:28:55.576155 kubelet[2466]: I0515 00:28:55.575046 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f966a81f-0997-40a6-9fea-29e50dec8072-socket-dir\") pod \"csi-node-driver-64gbz\" (UID: \"f966a81f-0997-40a6-9fea-29e50dec8072\") " pod="calico-system/csi-node-driver-64gbz" May 15 00:28:55.576417 kubelet[2466]: I0515 00:28:55.576395 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f966a81f-0997-40a6-9fea-29e50dec8072-registration-dir\") pod \"csi-node-driver-64gbz\" (UID: \"f966a81f-0997-40a6-9fea-29e50dec8072\") " pod="calico-system/csi-node-driver-64gbz" May 15 00:28:55.576634 kubelet[2466]: I0515 00:28:55.576618 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f966a81f-0997-40a6-9fea-29e50dec8072-kubelet-dir\") pod \"csi-node-driver-64gbz\" (UID: \"f966a81f-0997-40a6-9fea-29e50dec8072\") " pod="calico-system/csi-node-driver-64gbz" May 15 00:28:55.576735 kubelet[2466]: I0515 00:28:55.576722 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f966a81f-0997-40a6-9fea-29e50dec8072-varrun\") pod \"csi-node-driver-64gbz\" (UID: \"f966a81f-0997-40a6-9fea-29e50dec8072\") " pod="calico-system/csi-node-driver-64gbz" May 15 00:28:55.576835 kubelet[2466]: I0515 00:28:55.576819 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkc75\" (UniqueName: \"kubernetes.io/projected/f966a81f-0997-40a6-9fea-29e50dec8072-kube-api-access-zkc75\") pod \"csi-node-driver-64gbz\" (UID: \"f966a81f-0997-40a6-9fea-29e50dec8072\") " pod="calico-system/csi-node-driver-64gbz" May 15 00:28:55.590594 kubelet[2466]: E0515 00:28:55.589976 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.590594 kubelet[2466]: W0515 00:28:55.590008 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.590594 kubelet[2466]: E0515 00:28:55.590030 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.590594 kubelet[2466]: E0515 00:28:55.590286 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.590594 kubelet[2466]: W0515 00:28:55.590295 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.590594 kubelet[2466]: E0515 00:28:55.590452 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.590594 kubelet[2466]: W0515 00:28:55.590460 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.590594 kubelet[2466]: E0515 00:28:55.590596 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.590594 kubelet[2466]: W0515 00:28:55.590603 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.590728 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.591585 kubelet[2466]: W0515 00:28:55.590734 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.590743 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.590800 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.590941 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.591585 kubelet[2466]: W0515 00:28:55.590948 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.590956 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.591107 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.591585 kubelet[2466]: W0515 00:28:55.591114 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.591585 kubelet[2466]: E0515 00:28:55.591121 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.591862 kubelet[2466]: E0515 00:28:55.591375 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.591862 kubelet[2466]: E0515 00:28:55.591417 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.591862 kubelet[2466]: W0515 00:28:55.591442 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.591862 kubelet[2466]: E0515 00:28:55.591423 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.591862 kubelet[2466]: E0515 00:28:55.591456 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.596797 kubelet[2466]: E0515 00:28:55.596298 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.596797 kubelet[2466]: W0515 00:28:55.596318 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.596797 kubelet[2466]: E0515 00:28:55.596339 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.596797 kubelet[2466]: E0515 00:28:55.596594 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.596797 kubelet[2466]: W0515 00:28:55.596606 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.596797 kubelet[2466]: E0515 00:28:55.596616 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.626846 kubelet[2466]: E0515 00:28:55.626804 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:55.627586 containerd[1440]: time="2025-05-15T00:28:55.627546547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f596cd8d-w9cp8,Uid:4c1532cf-b326-4e29-90e4-a5b825f1b6c9,Namespace:calico-system,Attempt:0,}" May 15 00:28:55.650863 containerd[1440]: time="2025-05-15T00:28:55.650748153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:55.650863 containerd[1440]: time="2025-05-15T00:28:55.650832873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:55.650863 containerd[1440]: time="2025-05-15T00:28:55.650845673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:55.652793 containerd[1440]: time="2025-05-15T00:28:55.650940474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:55.655443 kubelet[2466]: E0515 00:28:55.653748 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:55.655569 containerd[1440]: time="2025-05-15T00:28:55.655419010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p48lx,Uid:d855040b-9f87-4e62-9b2c-54d30ca287b6,Namespace:calico-system,Attempt:0,}" May 15 00:28:55.669959 systemd[1]: Started cri-containerd-aba867a5fbf23e5039724b980793ec34f5548e6cf9bcd0687ed155dee006966e.scope - libcontainer container aba867a5fbf23e5039724b980793ec34f5548e6cf9bcd0687ed155dee006966e. May 15 00:28:55.679761 kubelet[2466]: E0515 00:28:55.679728 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.679761 kubelet[2466]: W0515 00:28:55.679755 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.679955 kubelet[2466]: E0515 00:28:55.679794 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.680504 kubelet[2466]: E0515 00:28:55.680421 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.680504 kubelet[2466]: W0515 00:28:55.680436 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.680504 kubelet[2466]: E0515 00:28:55.680456 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.682358 kubelet[2466]: E0515 00:28:55.681855 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.682358 kubelet[2466]: W0515 00:28:55.681875 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.682358 kubelet[2466]: E0515 00:28:55.681895 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.684372 kubelet[2466]: E0515 00:28:55.684200 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.684372 kubelet[2466]: W0515 00:28:55.684247 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.684372 kubelet[2466]: E0515 00:28:55.684270 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.684595 kubelet[2466]: E0515 00:28:55.684581 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.684683 kubelet[2466]: W0515 00:28:55.684671 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.684874 kubelet[2466]: E0515 00:28:55.684781 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.685036 kubelet[2466]: E0515 00:28:55.684997 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.685036 kubelet[2466]: W0515 00:28:55.685012 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.685247 kubelet[2466]: E0515 00:28:55.685160 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.685491 kubelet[2466]: E0515 00:28:55.685471 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.685661 kubelet[2466]: W0515 00:28:55.685564 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.685661 kubelet[2466]: E0515 00:28:55.685613 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.686680 kubelet[2466]: E0515 00:28:55.686567 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.686680 kubelet[2466]: W0515 00:28:55.686581 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.686680 kubelet[2466]: E0515 00:28:55.686622 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.686899 kubelet[2466]: E0515 00:28:55.686883 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.686976 kubelet[2466]: W0515 00:28:55.686963 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.687091 kubelet[2466]: E0515 00:28:55.687069 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.687714 kubelet[2466]: E0515 00:28:55.687598 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.687714 kubelet[2466]: W0515 00:28:55.687619 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.687714 kubelet[2466]: E0515 00:28:55.687680 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.687948 kubelet[2466]: E0515 00:28:55.687932 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.688126 kubelet[2466]: W0515 00:28:55.688017 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.688126 kubelet[2466]: E0515 00:28:55.688056 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.688268 kubelet[2466]: E0515 00:28:55.688254 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.688346 kubelet[2466]: W0515 00:28:55.688333 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.688527 kubelet[2466]: E0515 00:28:55.688437 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.688749 kubelet[2466]: E0515 00:28:55.688616 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.688749 kubelet[2466]: W0515 00:28:55.688629 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.688749 kubelet[2466]: E0515 00:28:55.688660 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.688999 kubelet[2466]: E0515 00:28:55.688984 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.689095 kubelet[2466]: W0515 00:28:55.689083 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.689262 kubelet[2466]: E0515 00:28:55.689203 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.689437 kubelet[2466]: E0515 00:28:55.689420 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.689619 kubelet[2466]: W0515 00:28:55.689517 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.689619 kubelet[2466]: E0515 00:28:55.689560 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.689911 kubelet[2466]: E0515 00:28:55.689815 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.689911 kubelet[2466]: W0515 00:28:55.689829 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.689911 kubelet[2466]: E0515 00:28:55.689856 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.690101 kubelet[2466]: E0515 00:28:55.690088 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.690170 kubelet[2466]: W0515 00:28:55.690158 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.690307 kubelet[2466]: E0515 00:28:55.690286 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.690531 kubelet[2466]: E0515 00:28:55.690516 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.690714 kubelet[2466]: W0515 00:28:55.690598 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.690714 kubelet[2466]: E0515 00:28:55.690630 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.690911 kubelet[2466]: E0515 00:28:55.690896 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.690964 kubelet[2466]: W0515 00:28:55.690953 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.691046 kubelet[2466]: E0515 00:28:55.691021 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.691352 kubelet[2466]: E0515 00:28:55.691334 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.691427 kubelet[2466]: W0515 00:28:55.691413 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.691580 kubelet[2466]: E0515 00:28:55.691508 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.691712 kubelet[2466]: E0515 00:28:55.691699 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.692170 kubelet[2466]: W0515 00:28:55.692150 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.692809 kubelet[2466]: E0515 00:28:55.692789 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693052 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.695605 kubelet[2466]: W0515 00:28:55.693064 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693150 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693253 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.695605 kubelet[2466]: W0515 00:28:55.693262 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693295 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693454 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.695605 kubelet[2466]: W0515 00:28:55.693465 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693483 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.695605 kubelet[2466]: E0515 00:28:55.693798 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.696076 kubelet[2466]: W0515 00:28:55.693809 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.696076 kubelet[2466]: E0515 00:28:55.693819 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:55.711630 containerd[1440]: time="2025-05-15T00:28:55.711586938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85f596cd8d-w9cp8,Uid:4c1532cf-b326-4e29-90e4-a5b825f1b6c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"aba867a5fbf23e5039724b980793ec34f5548e6cf9bcd0687ed155dee006966e\"" May 15 00:28:55.712468 kubelet[2466]: E0515 00:28:55.712328 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:55.718435 containerd[1440]: time="2025-05-15T00:28:55.717481520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 00:28:55.719598 kubelet[2466]: E0515 00:28:55.719516 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:55.719598 kubelet[2466]: W0515 00:28:55.719548 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:55.719598 kubelet[2466]: E0515 00:28:55.719574 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:55.745168 containerd[1440]: time="2025-05-15T00:28:55.745047742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:28:55.745168 containerd[1440]: time="2025-05-15T00:28:55.745111662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:28:55.745168 containerd[1440]: time="2025-05-15T00:28:55.745138622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:55.745396 containerd[1440]: time="2025-05-15T00:28:55.745240942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:28:55.766948 systemd[1]: Started cri-containerd-0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3.scope - libcontainer container 0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3. May 15 00:28:55.792583 containerd[1440]: time="2025-05-15T00:28:55.792541797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p48lx,Uid:d855040b-9f87-4e62-9b2c-54d30ca287b6,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\"" May 15 00:28:55.793580 kubelet[2466]: E0515 00:28:55.793415 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:56.909065 kubelet[2466]: E0515 00:28:56.908989 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:28:58.100548 kubelet[2466]: E0515 00:28:58.100500 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:58.194410 kubelet[2466]: E0515 00:28:58.194363 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.194563 kubelet[2466]: W0515 00:28:58.194425 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.194563 kubelet[2466]: E0515 00:28:58.194448 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:58.194707 kubelet[2466]: E0515 00:28:58.194694 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.194748 kubelet[2466]: W0515 00:28:58.194707 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.194748 kubelet[2466]: E0515 00:28:58.194718 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.194955 kubelet[2466]: E0515 00:28:58.194942 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.194955 kubelet[2466]: W0515 00:28:58.194955 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.195035 kubelet[2466]: E0515 00:28:58.194965 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.195210 kubelet[2466]: E0515 00:28:58.195199 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.195254 kubelet[2466]: W0515 00:28:58.195239 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.195254 kubelet[2466]: E0515 00:28:58.195251 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.195500 kubelet[2466]: E0515 00:28:58.195487 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.195500 kubelet[2466]: W0515 00:28:58.195500 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.195582 kubelet[2466]: E0515 00:28:58.195509 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.195724 kubelet[2466]: E0515 00:28:58.195712 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.195724 kubelet[2466]: W0515 00:28:58.195724 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.195789 kubelet[2466]: E0515 00:28:58.195733 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:58.195975 kubelet[2466]: E0515 00:28:58.195963 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.196009 kubelet[2466]: W0515 00:28:58.195978 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.196009 kubelet[2466]: E0515 00:28:58.195987 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.196195 kubelet[2466]: E0515 00:28:58.196183 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.196195 kubelet[2466]: W0515 00:28:58.196194 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.196259 kubelet[2466]: E0515 00:28:58.196208 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.196456 kubelet[2466]: E0515 00:28:58.196443 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.196491 kubelet[2466]: W0515 00:28:58.196456 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.196491 kubelet[2466]: E0515 00:28:58.196482 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.196741 kubelet[2466]: E0515 00:28:58.196728 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.196741 kubelet[2466]: W0515 00:28:58.196740 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.196832 kubelet[2466]: E0515 00:28:58.196749 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.197020 kubelet[2466]: E0515 00:28:58.197008 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.197020 kubelet[2466]: W0515 00:28:58.197020 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.197086 kubelet[2466]: E0515 00:28:58.197030 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:58.197330 kubelet[2466]: E0515 00:28:58.197313 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.197330 kubelet[2466]: W0515 00:28:58.197323 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.197330 kubelet[2466]: E0515 00:28:58.197332 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.197947 kubelet[2466]: E0515 00:28:58.197930 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.197947 kubelet[2466]: W0515 00:28:58.197946 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.198023 kubelet[2466]: E0515 00:28:58.197957 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.198380 kubelet[2466]: E0515 00:28:58.198362 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.198415 kubelet[2466]: W0515 00:28:58.198390 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.198415 kubelet[2466]: E0515 00:28:58.198401 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:58.198822 kubelet[2466]: E0515 00:28:58.198807 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:58.198857 kubelet[2466]: W0515 00:28:58.198825 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:58.198857 kubelet[2466]: E0515 00:28:58.198840 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:58.339139 containerd[1440]: time="2025-05-15T00:28:58.339087832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:58.340147 containerd[1440]: time="2025-05-15T00:28:58.340116635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571" May 15 00:28:58.341122 containerd[1440]: time="2025-05-15T00:28:58.341094558Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:58.342857 containerd[1440]: time="2025-05-15T00:28:58.342826963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:58.343711 containerd[1440]: time="2025-05-15T00:28:58.343628726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 2.625210163s" May 15 00:28:58.343711 containerd[1440]: time="2025-05-15T00:28:58.343661686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\"" May 15 00:28:58.344987 containerd[1440]: time="2025-05-15T00:28:58.344806009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 00:28:58.356639 containerd[1440]: time="2025-05-15T00:28:58.356530565Z" level=info msg="CreateContainer within sandbox \"aba867a5fbf23e5039724b980793ec34f5548e6cf9bcd0687ed155dee006966e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 00:28:58.373574 containerd[1440]: time="2025-05-15T00:28:58.373438657Z" level=info msg="CreateContainer within sandbox \"aba867a5fbf23e5039724b980793ec34f5548e6cf9bcd0687ed155dee006966e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"757ddacdaaf51fe1b92e366623ddaf14edce21968326776f8e90e5762abedde6\"" May 15 00:28:58.374283 containerd[1440]: time="2025-05-15T00:28:58.374226899Z" level=info msg="StartContainer for \"757ddacdaaf51fe1b92e366623ddaf14edce21968326776f8e90e5762abedde6\"" May 15 00:28:58.403991 systemd[1]: Started cri-containerd-757ddacdaaf51fe1b92e366623ddaf14edce21968326776f8e90e5762abedde6.scope - libcontainer container 757ddacdaaf51fe1b92e366623ddaf14edce21968326776f8e90e5762abedde6. May 15 00:28:58.453386 containerd[1440]: time="2025-05-15T00:28:58.451893376Z" level=info msg="StartContainer for \"757ddacdaaf51fe1b92e366623ddaf14edce21968326776f8e90e5762abedde6\" returns successfully" May 15 00:28:58.608432 update_engine[1426]: I20250515 00:28:58.607429 1426 update_attempter.cc:509] Updating boot flags... 
May 15 00:28:58.643987 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3053) May 15 00:28:58.692813 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3054) May 15 00:28:58.909478 kubelet[2466]: E0515 00:28:58.909436 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:28:58.976869 kubelet[2466]: E0515 00:28:58.976814 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:58.977194 kubelet[2466]: E0515 00:28:58.976959 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:58.989614 kubelet[2466]: I0515 00:28:58.989396 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85f596cd8d-w9cp8" podStartSLOduration=1.361000119 podStartE2EDuration="3.989369093s" podCreationTimestamp="2025-05-15 00:28:55 +0000 UTC" firstStartedPulling="2025-05-15 00:28:55.716266915 +0000 UTC m=+11.885075313" lastFinishedPulling="2025-05-15 00:28:58.344635889 +0000 UTC m=+14.513444287" observedRunningTime="2025-05-15 00:28:58.988066489 +0000 UTC m=+15.156874887" watchObservedRunningTime="2025-05-15 00:28:58.989369093 +0000 UTC m=+15.158177491" May 15 00:28:59.003183 kubelet[2466]: E0515 00:28:59.003152 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.003183 kubelet[2466]: W0515 00:28:59.003178 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.003451 kubelet[2466]: E0515 00:28:59.003198 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.003850 kubelet[2466]: E0515 00:28:59.003487 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.003850 kubelet[2466]: W0515 00:28:59.003504 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.003850 kubelet[2466]: E0515 00:28:59.003515 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.003850 kubelet[2466]: E0515 00:28:59.003708 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.003850 kubelet[2466]: W0515 00:28:59.003717 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.003850 kubelet[2466]: E0515 00:28:59.003726 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.004039 kubelet[2466]: E0515 00:28:59.004019 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.004039 kubelet[2466]: W0515 00:28:59.004035 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.004308 kubelet[2466]: E0515 00:28:59.004119 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.004903 kubelet[2466]: E0515 00:28:59.004886 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.004903 kubelet[2466]: W0515 00:28:59.004903 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.004974 kubelet[2466]: E0515 00:28:59.004913 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.005178 kubelet[2466]: E0515 00:28:59.005162 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.005178 kubelet[2466]: W0515 00:28:59.005176 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.005226 kubelet[2466]: E0515 00:28:59.005186 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.005842 kubelet[2466]: E0515 00:28:59.005824 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.005842 kubelet[2466]: W0515 00:28:59.005840 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.005928 kubelet[2466]: E0515 00:28:59.005851 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.006464 kubelet[2466]: E0515 00:28:59.006437 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.006464 kubelet[2466]: W0515 00:28:59.006454 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.006535 kubelet[2466]: E0515 00:28:59.006473 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.006927 kubelet[2466]: E0515 00:28:59.006675 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.006927 kubelet[2466]: W0515 00:28:59.006688 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.006927 kubelet[2466]: E0515 00:28:59.006701 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.006927 kubelet[2466]: E0515 00:28:59.006920 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.007055 kubelet[2466]: W0515 00:28:59.006929 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.007055 kubelet[2466]: E0515 00:28:59.006956 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.007441 kubelet[2466]: E0515 00:28:59.007421 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.007441 kubelet[2466]: W0515 00:28:59.007441 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.007507 kubelet[2466]: E0515 00:28:59.007453 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.008440 kubelet[2466]: E0515 00:28:59.008416 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.008440 kubelet[2466]: W0515 00:28:59.008433 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.008563 kubelet[2466]: E0515 00:28:59.008446 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.008761 kubelet[2466]: E0515 00:28:59.008740 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.008761 kubelet[2466]: W0515 00:28:59.008755 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.008854 kubelet[2466]: E0515 00:28:59.008787 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.009080 kubelet[2466]: E0515 00:28:59.009018 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.009080 kubelet[2466]: W0515 00:28:59.009035 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.009080 kubelet[2466]: E0515 00:28:59.009046 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.009471 kubelet[2466]: E0515 00:28:59.009232 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.009471 kubelet[2466]: W0515 00:28:59.009246 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.009471 kubelet[2466]: E0515 00:28:59.009255 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.009471 kubelet[2466]: E0515 00:28:59.009451 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.009471 kubelet[2466]: W0515 00:28:59.009461 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.009637 kubelet[2466]: E0515 00:28:59.009476 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.009637 kubelet[2466]: E0515 00:28:59.009614 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.009637 kubelet[2466]: W0515 00:28:59.009622 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.009637 kubelet[2466]: E0515 00:28:59.009631 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.009761 kubelet[2466]: E0515 00:28:59.009751 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.009761 kubelet[2466]: W0515 00:28:59.009758 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.009912 kubelet[2466]: E0515 00:28:59.009777 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.009959 kubelet[2466]: E0515 00:28:59.009928 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.009959 kubelet[2466]: W0515 00:28:59.009936 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.010016 kubelet[2466]: E0515 00:28:59.009947 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.010432 kubelet[2466]: E0515 00:28:59.010417 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.010472 kubelet[2466]: W0515 00:28:59.010431 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.010472 kubelet[2466]: E0515 00:28:59.010456 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.010836 kubelet[2466]: E0515 00:28:59.010822 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.010836 kubelet[2466]: W0515 00:28:59.010835 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.010968 kubelet[2466]: E0515 00:28:59.010936 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.011092 kubelet[2466]: E0515 00:28:59.011079 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.011092 kubelet[2466]: W0515 00:28:59.011092 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.011152 kubelet[2466]: E0515 00:28:59.011107 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.011883 kubelet[2466]: E0515 00:28:59.011866 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.011883 kubelet[2466]: W0515 00:28:59.011882 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.011971 kubelet[2466]: E0515 00:28:59.011898 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.012229 kubelet[2466]: E0515 00:28:59.012214 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.012264 kubelet[2466]: W0515 00:28:59.012229 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.012264 kubelet[2466]: E0515 00:28:59.012244 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.012455 kubelet[2466]: E0515 00:28:59.012442 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.012491 kubelet[2466]: W0515 00:28:59.012455 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.012491 kubelet[2466]: E0515 00:28:59.012512 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.012665 kubelet[2466]: E0515 00:28:59.012653 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.012700 kubelet[2466]: W0515 00:28:59.012666 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.012700 kubelet[2466]: E0515 00:28:59.012720 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.012874 kubelet[2466]: E0515 00:28:59.012857 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.012874 kubelet[2466]: W0515 00:28:59.012871 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.012950 kubelet[2466]: E0515 00:28:59.012899 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.013054 kubelet[2466]: E0515 00:28:59.013034 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.013054 kubelet[2466]: W0515 00:28:59.013046 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.013104 kubelet[2466]: E0515 00:28:59.013061 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.013251 kubelet[2466]: E0515 00:28:59.013219 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.013251 kubelet[2466]: W0515 00:28:59.013231 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.013251 kubelet[2466]: E0515 00:28:59.013245 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.013425 kubelet[2466]: E0515 00:28:59.013408 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.013425 kubelet[2466]: W0515 00:28:59.013421 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.013476 kubelet[2466]: E0515 00:28:59.013437 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.013614 kubelet[2466]: E0515 00:28:59.013600 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.013614 kubelet[2466]: W0515 00:28:59.013613 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.013670 kubelet[2466]: E0515 00:28:59.013627 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.013885 kubelet[2466]: E0515 00:28:59.013860 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.013885 kubelet[2466]: W0515 00:28:59.013875 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.013885 kubelet[2466]: E0515 00:28:59.013890 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.014203 kubelet[2466]: E0515 00:28:59.014055 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.014203 kubelet[2466]: W0515 00:28:59.014067 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.014203 kubelet[2466]: E0515 00:28:59.014145 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.014694 kubelet[2466]: E0515 00:28:59.014679 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.014694 kubelet[2466]: W0515 00:28:59.014694 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.014778 kubelet[2466]: E0515 00:28:59.014749 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.015043 kubelet[2466]: E0515 00:28:59.015028 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.015043 kubelet[2466]: W0515 00:28:59.015041 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.015178 kubelet[2466]: E0515 00:28:59.015100 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.015533 kubelet[2466]: E0515 00:28:59.015495 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.015533 kubelet[2466]: W0515 00:28:59.015512 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.016084 kubelet[2466]: E0515 00:28:59.015726 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.016084 kubelet[2466]: E0515 00:28:59.015730 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.016084 kubelet[2466]: W0515 00:28:59.015758 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.016084 kubelet[2466]: E0515 00:28:59.015794 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.016084 kubelet[2466]: E0515 00:28:59.015972 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.016084 kubelet[2466]: W0515 00:28:59.015981 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.016084 kubelet[2466]: E0515 00:28:59.015998 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.016225 kubelet[2466]: E0515 00:28:59.016178 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.016225 kubelet[2466]: W0515 00:28:59.016186 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.016225 kubelet[2466]: E0515 00:28:59.016194 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.016710 kubelet[2466]: E0515 00:28:59.016418 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.016710 kubelet[2466]: W0515 00:28:59.016432 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.016710 kubelet[2466]: E0515 00:28:59.016447 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.021569 kubelet[2466]: E0515 00:28:59.021442 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.021759 kubelet[2466]: W0515 00:28:59.021739 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.022158 kubelet[2466]: E0515 00:28:59.022142 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.022884 kubelet[2466]: E0515 00:28:59.022865 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.022884 kubelet[2466]: W0515 00:28:59.022882 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.022961 kubelet[2466]: E0515 00:28:59.022912 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.023145 kubelet[2466]: E0515 00:28:59.023131 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.023145 kubelet[2466]: W0515 00:28:59.023143 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.023204 kubelet[2466]: E0515 00:28:59.023153 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.023533 kubelet[2466]: E0515 00:28:59.023499 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.023533 kubelet[2466]: W0515 00:28:59.023517 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.023533 kubelet[2466]: E0515 00:28:59.023531 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.023724 kubelet[2466]: E0515 00:28:59.023691 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.023724 kubelet[2466]: W0515 00:28:59.023703 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.023724 kubelet[2466]: E0515 00:28:59.023714 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.024209 kubelet[2466]: E0515 00:28:59.023928 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.024209 kubelet[2466]: W0515 00:28:59.023937 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.024209 kubelet[2466]: E0515 00:28:59.023945 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.024349 kubelet[2466]: E0515 00:28:59.024333 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.025005 kubelet[2466]: W0515 00:28:59.024441 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.025005 kubelet[2466]: E0515 00:28:59.024468 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:28:59.025319 kubelet[2466]: E0515 00:28:59.025302 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:28:59.025424 kubelet[2466]: W0515 00:28:59.025409 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:28:59.025480 kubelet[2466]: E0515 00:28:59.025468 2466 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:28:59.786613 containerd[1440]: time="2025-05-15T00:28:59.786565891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:59.787328 containerd[1440]: time="2025-05-15T00:28:59.787295053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=5122903" May 15 00:28:59.788487 containerd[1440]: time="2025-05-15T00:28:59.788243336Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:59.790746 containerd[1440]: time="2025-05-15T00:28:59.790688543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:28:59.791365 containerd[1440]: time="2025-05-15T00:28:59.791337065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.446493536s" May 15 00:28:59.791417 containerd[1440]: time="2025-05-15T00:28:59.791366185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 15 00:28:59.793658 containerd[1440]: time="2025-05-15T00:28:59.793623431Z" level=info msg="CreateContainer within sandbox \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:28:59.812585 containerd[1440]: time="2025-05-15T00:28:59.812539645Z" level=info msg="CreateContainer within sandbox \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626\"" May 15 00:28:59.813902 containerd[1440]: time="2025-05-15T00:28:59.813068047Z" level=info msg="StartContainer for \"f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626\"" May 15 00:28:59.841949 systemd[1]: Started cri-containerd-f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626.scope - libcontainer container f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626. 
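The long run of "Failed to unmarshal output for command: init" entries above is kubelet probing the nodeagent~uds FlexVolume plugin directory: the uds driver binary is not installed (executable file not found in $PATH), so the init call returns no output and decoding the empty response fails. A minimal, self-contained Go sketch of that decode step, assuming an illustrative driverStatus struct rather than kubelet's exact type:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // driverStatus is an illustrative stand-in for the status object kubelet
    // expects a FlexVolume driver to print on stdout after "init".
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // The driver executable is missing, so the call produces no output;
        // unmarshaling the empty byte slice reproduces the
        // "unexpected end of JSON input" error logged above.
        var st driverStatus
        err := json.Unmarshal([]byte(""), &st)
        fmt.Println(err) // unexpected end of JSON input
    }

A working FlexVolume driver is expected to answer init with a small JSON status document (roughly {"status":"Success","capabilities":{"attach":false}}), which is why kubelet logs the empty output as a failed plugin probe and skips the directory instead of treating it as fatal.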
May 15 00:28:59.869570 containerd[1440]: time="2025-05-15T00:28:59.869520728Z" level=info msg="StartContainer for \"f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626\" returns successfully" May 15 00:28:59.911839 systemd[1]: cri-containerd-f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626.scope: Deactivated successfully. May 15 00:28:59.969412 containerd[1440]: time="2025-05-15T00:28:59.956531936Z" level=info msg="shim disconnected" id=f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626 namespace=k8s.io May 15 00:28:59.969412 containerd[1440]: time="2025-05-15T00:28:59.969407573Z" level=warning msg="cleaning up after shim disconnected" id=f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626 namespace=k8s.io May 15 00:28:59.969652 containerd[1440]: time="2025-05-15T00:28:59.969428773Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:28:59.981789 kubelet[2466]: I0515 00:28:59.981724 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:28:59.983171 kubelet[2466]: E0515 00:28:59.982207 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:28:59.983171 kubelet[2466]: E0515 00:28:59.982503 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:00.351023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9847df8cedb79cbf247ff89684d602d99f4769370a8a5244985a044bff90626-rootfs.mount: Deactivated successfully. May 15 00:29:00.908909 kubelet[2466]: E0515 00:29:00.908867 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:29:00.984609 kubelet[2466]: E0515 00:29:00.984556 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:00.988181 containerd[1440]: time="2025-05-15T00:29:00.987474704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 00:29:02.909050 kubelet[2466]: E0515 00:29:02.908962 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:29:04.909520 kubelet[2466]: E0515 00:29:04.909469 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:29:05.574029 containerd[1440]: time="2025-05-15T00:29:05.573979427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:05.574586 containerd[1440]: time="2025-05-15T00:29:05.574471188Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 15 00:29:05.577819 containerd[1440]: time="2025-05-15T00:29:05.577718035Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:05.584450 containerd[1440]: time="2025-05-15T00:29:05.582107083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:05.584450 containerd[1440]: time="2025-05-15T00:29:05.583394086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 4.595880981s" May 15 00:29:05.584450 containerd[1440]: time="2025-05-15T00:29:05.583670006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 15 00:29:05.591045 containerd[1440]: time="2025-05-15T00:29:05.590987180Z" level=info msg="CreateContainer within sandbox \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:29:05.607851 containerd[1440]: time="2025-05-15T00:29:05.607752933Z" level=info msg="CreateContainer within sandbox \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7\"" May 15 00:29:05.609037 containerd[1440]: time="2025-05-15T00:29:05.608994855Z" level=info msg="StartContainer for \"c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7\"" May 15 00:29:05.641025 systemd[1]: Started cri-containerd-c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7.scope - libcontainer container c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7. May 15 00:29:05.680979 containerd[1440]: time="2025-05-15T00:29:05.680925315Z" level=info msg="StartContainer for \"c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7\" returns successfully" May 15 00:29:05.996942 kubelet[2466]: E0515 00:29:05.996910 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:06.375621 systemd[1]: cri-containerd-c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7.scope: Deactivated successfully. May 15 00:29:06.394727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7-rootfs.mount: Deactivated successfully. 
May 15 00:29:06.441329 containerd[1440]: time="2025-05-15T00:29:06.441265655Z" level=info msg="shim disconnected" id=c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7 namespace=k8s.io May 15 00:29:06.441329 containerd[1440]: time="2025-05-15T00:29:06.441325535Z" level=warning msg="cleaning up after shim disconnected" id=c66f6cf002bb7c6536a97bee696f42673cea0620a0aa7769341fdd51cd4823e7 namespace=k8s.io May 15 00:29:06.441329 containerd[1440]: time="2025-05-15T00:29:06.441334455Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 00:29:06.472929 kubelet[2466]: I0515 00:29:06.472889 2466 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 00:29:06.511522 systemd[1]: Created slice kubepods-besteffort-pod1c672a4f_dc2d_49d4_92b2_aa1858123efa.slice - libcontainer container kubepods-besteffort-pod1c672a4f_dc2d_49d4_92b2_aa1858123efa.slice. May 15 00:29:06.517663 systemd[1]: Created slice kubepods-burstable-pod936f2675_5caf_4d04_b8fa_7b40aacfa39d.slice - libcontainer container kubepods-burstable-pod936f2675_5caf_4d04_b8fa_7b40aacfa39d.slice. May 15 00:29:06.527603 systemd[1]: Created slice kubepods-burstable-pod4b70c740_ad72_450d_bead_e209924ab516.slice - libcontainer container kubepods-burstable-pod4b70c740_ad72_450d_bead_e209924ab516.slice. May 15 00:29:06.533723 systemd[1]: Created slice kubepods-besteffort-pod23f642bf_ba69_45cb_b737_8658c85d25d9.slice - libcontainer container kubepods-besteffort-pod23f642bf_ba69_45cb_b737_8658c85d25d9.slice. May 15 00:29:06.539833 systemd[1]: Created slice kubepods-besteffort-podcdc568ed_985b_493b_94ee_815dbe88ed76.slice - libcontainer container kubepods-besteffort-podcdc568ed_985b_493b_94ee_815dbe88ed76.slice. May 15 00:29:06.572085 kubelet[2466]: I0515 00:29:06.572015 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggkcm\" (UniqueName: \"kubernetes.io/projected/1c672a4f-dc2d-49d4-92b2-aa1858123efa-kube-api-access-ggkcm\") pod \"calico-apiserver-79d7d69dbd-j887d\" (UID: \"1c672a4f-dc2d-49d4-92b2-aa1858123efa\") " pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" May 15 00:29:06.572085 kubelet[2466]: I0515 00:29:06.572080 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c672a4f-dc2d-49d4-92b2-aa1858123efa-calico-apiserver-certs\") pod \"calico-apiserver-79d7d69dbd-j887d\" (UID: \"1c672a4f-dc2d-49d4-92b2-aa1858123efa\") " pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" May 15 00:29:06.672510 kubelet[2466]: I0515 00:29:06.672384 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc58g\" (UniqueName: \"kubernetes.io/projected/23f642bf-ba69-45cb-b737-8658c85d25d9-kube-api-access-nc58g\") pod \"calico-apiserver-79d7d69dbd-qwhnz\" (UID: \"23f642bf-ba69-45cb-b737-8658c85d25d9\") " pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" May 15 00:29:06.672510 kubelet[2466]: I0515 00:29:06.672428 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc568ed-985b-493b-94ee-815dbe88ed76-tigera-ca-bundle\") pod \"calico-kube-controllers-84874c75bf-xm54f\" (UID: \"cdc568ed-985b-493b-94ee-815dbe88ed76\") " pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" May 15 00:29:06.672510 kubelet[2466]: I0515 00:29:06.672445 2466 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbbv\" (UniqueName: \"kubernetes.io/projected/cdc568ed-985b-493b-94ee-815dbe88ed76-kube-api-access-5kbbv\") pod \"calico-kube-controllers-84874c75bf-xm54f\" (UID: \"cdc568ed-985b-493b-94ee-815dbe88ed76\") " pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" May 15 00:29:06.672510 kubelet[2466]: I0515 00:29:06.672464 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/936f2675-5caf-4d04-b8fa-7b40aacfa39d-config-volume\") pod \"coredns-6f6b679f8f-v6wlt\" (UID: \"936f2675-5caf-4d04-b8fa-7b40aacfa39d\") " pod="kube-system/coredns-6f6b679f8f-v6wlt" May 15 00:29:06.672720 kubelet[2466]: I0515 00:29:06.672533 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23f642bf-ba69-45cb-b737-8658c85d25d9-calico-apiserver-certs\") pod \"calico-apiserver-79d7d69dbd-qwhnz\" (UID: \"23f642bf-ba69-45cb-b737-8658c85d25d9\") " pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" May 15 00:29:06.672720 kubelet[2466]: I0515 00:29:06.672588 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8qz2\" (UniqueName: \"kubernetes.io/projected/936f2675-5caf-4d04-b8fa-7b40aacfa39d-kube-api-access-c8qz2\") pod \"coredns-6f6b679f8f-v6wlt\" (UID: \"936f2675-5caf-4d04-b8fa-7b40aacfa39d\") " pod="kube-system/coredns-6f6b679f8f-v6wlt" May 15 00:29:06.672720 kubelet[2466]: I0515 00:29:06.672696 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4b70c740-ad72-450d-bead-e209924ab516-config-volume\") pod \"coredns-6f6b679f8f-gtg4n\" (UID: \"4b70c740-ad72-450d-bead-e209924ab516\") " pod="kube-system/coredns-6f6b679f8f-gtg4n" May 15 00:29:06.672811 kubelet[2466]: I0515 00:29:06.672721 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8js98\" (UniqueName: \"kubernetes.io/projected/4b70c740-ad72-450d-bead-e209924ab516-kube-api-access-8js98\") pod \"coredns-6f6b679f8f-gtg4n\" (UID: \"4b70c740-ad72-450d-bead-e209924ab516\") " pod="kube-system/coredns-6f6b679f8f-gtg4n" May 15 00:29:06.817116 containerd[1440]: time="2025-05-15T00:29:06.817069458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-j887d,Uid:1c672a4f-dc2d-49d4-92b2-aa1858123efa,Namespace:calico-apiserver,Attempt:0,}" May 15 00:29:06.821521 kubelet[2466]: E0515 00:29:06.821467 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:06.822491 containerd[1440]: time="2025-05-15T00:29:06.822292788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v6wlt,Uid:936f2675-5caf-4d04-b8fa-7b40aacfa39d,Namespace:kube-system,Attempt:0,}" May 15 00:29:06.831063 kubelet[2466]: E0515 00:29:06.830195 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:06.831800 containerd[1440]: time="2025-05-15T00:29:06.831717885Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-gtg4n,Uid:4b70c740-ad72-450d-bead-e209924ab516,Namespace:kube-system,Attempt:0,}" May 15 00:29:06.837970 containerd[1440]: time="2025-05-15T00:29:06.837929456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-qwhnz,Uid:23f642bf-ba69-45cb-b737-8658c85d25d9,Namespace:calico-apiserver,Attempt:0,}" May 15 00:29:06.842899 containerd[1440]: time="2025-05-15T00:29:06.842857425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84874c75bf-xm54f,Uid:cdc568ed-985b-493b-94ee-815dbe88ed76,Namespace:calico-system,Attempt:0,}" May 15 00:29:06.924903 systemd[1]: Created slice kubepods-besteffort-podf966a81f_0997_40a6_9fea_29e50dec8072.slice - libcontainer container kubepods-besteffort-podf966a81f_0997_40a6_9fea_29e50dec8072.slice. May 15 00:29:06.927836 containerd[1440]: time="2025-05-15T00:29:06.927647779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64gbz,Uid:f966a81f-0997-40a6-9fea-29e50dec8072,Namespace:calico-system,Attempt:0,}" May 15 00:29:07.018189 kubelet[2466]: E0515 00:29:07.015308 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:07.023630 containerd[1440]: time="2025-05-15T00:29:07.023272390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 00:29:07.278430 containerd[1440]: time="2025-05-15T00:29:07.278299705Z" level=error msg="Failed to destroy network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.278940 containerd[1440]: time="2025-05-15T00:29:07.278891866Z" level=error msg="encountered an error cleaning up failed sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.279004 containerd[1440]: time="2025-05-15T00:29:07.278957346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64gbz,Uid:f966a81f-0997-40a6-9fea-29e50dec8072,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.282892 kubelet[2466]: E0515 00:29:07.282825 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.283053 kubelet[2466]: E0515 00:29:07.282921 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-64gbz" May 15 00:29:07.283053 kubelet[2466]: E0515 00:29:07.282941 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-64gbz" May 15 00:29:07.283053 kubelet[2466]: E0515 00:29:07.282990 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-64gbz_calico-system(f966a81f-0997-40a6-9fea-29e50dec8072)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-64gbz_calico-system(f966a81f-0997-40a6-9fea-29e50dec8072)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:29:07.283403 containerd[1440]: time="2025-05-15T00:29:07.283349514Z" level=error msg="Failed to destroy network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.284335 containerd[1440]: time="2025-05-15T00:29:07.284291155Z" level=error msg="encountered an error cleaning up failed sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.284619 containerd[1440]: time="2025-05-15T00:29:07.284584436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gtg4n,Uid:4b70c740-ad72-450d-bead-e209924ab516,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.284851 kubelet[2466]: E0515 00:29:07.284816 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.284912 kubelet[2466]: E0515 00:29:07.284871 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gtg4n" May 15 00:29:07.285003 kubelet[2466]: E0515 00:29:07.284912 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gtg4n" May 15 00:29:07.285003 kubelet[2466]: E0515 00:29:07.284949 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gtg4n_kube-system(4b70c740-ad72-450d-bead-e209924ab516)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gtg4n_kube-system(4b70c740-ad72-450d-bead-e209924ab516)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gtg4n" podUID="4b70c740-ad72-450d-bead-e209924ab516" May 15 00:29:07.292219 containerd[1440]: time="2025-05-15T00:29:07.292170329Z" level=error msg="Failed to destroy network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.293192 containerd[1440]: time="2025-05-15T00:29:07.293118450Z" level=error msg="encountered an error cleaning up failed sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.293389 containerd[1440]: time="2025-05-15T00:29:07.293178570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v6wlt,Uid:936f2675-5caf-4d04-b8fa-7b40aacfa39d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.293778 kubelet[2466]: E0515 00:29:07.293706 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.293778 kubelet[2466]: E0515 00:29:07.293759 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v6wlt" May 15 00:29:07.293880 kubelet[2466]: E0515 00:29:07.293794 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-v6wlt" May 15 00:29:07.293880 kubelet[2466]: E0515 00:29:07.293837 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-v6wlt_kube-system(936f2675-5caf-4d04-b8fa-7b40aacfa39d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-v6wlt_kube-system(936f2675-5caf-4d04-b8fa-7b40aacfa39d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v6wlt" podUID="936f2675-5caf-4d04-b8fa-7b40aacfa39d" May 15 00:29:07.299458 containerd[1440]: time="2025-05-15T00:29:07.299390021Z" level=error msg="Failed to destroy network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.299803 containerd[1440]: time="2025-05-15T00:29:07.299527861Z" level=error msg="Failed to destroy network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.299884 containerd[1440]: time="2025-05-15T00:29:07.299798742Z" level=error msg="encountered an error cleaning up failed sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.299884 containerd[1440]: time="2025-05-15T00:29:07.299840582Z" level=error msg="encountered an error cleaning up failed sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.299943 containerd[1440]: time="2025-05-15T00:29:07.299880582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-qwhnz,Uid:23f642bf-ba69-45cb-b737-8658c85d25d9,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.299995 containerd[1440]: time="2025-05-15T00:29:07.299844302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84874c75bf-xm54f,Uid:cdc568ed-985b-493b-94ee-815dbe88ed76,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.300132 kubelet[2466]: E0515 00:29:07.300085 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.300132 kubelet[2466]: E0515 00:29:07.300098 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.300540 kubelet[2466]: E0515 00:29:07.300499 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" May 15 00:29:07.300575 kubelet[2466]: E0515 00:29:07.300537 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" May 15 00:29:07.300619 kubelet[2466]: E0515 00:29:07.300589 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84874c75bf-xm54f_calico-system(cdc568ed-985b-493b-94ee-815dbe88ed76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84874c75bf-xm54f_calico-system(cdc568ed-985b-493b-94ee-815dbe88ed76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" podUID="cdc568ed-985b-493b-94ee-815dbe88ed76" May 15 00:29:07.300926 kubelet[2466]: E0515 00:29:07.300145 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" May 15 00:29:07.300981 kubelet[2466]: E0515 00:29:07.300928 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" May 15 00:29:07.301008 kubelet[2466]: E0515 00:29:07.300976 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d7d69dbd-qwhnz_calico-apiserver(23f642bf-ba69-45cb-b737-8658c85d25d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d7d69dbd-qwhnz_calico-apiserver(23f642bf-ba69-45cb-b737-8658c85d25d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" podUID="23f642bf-ba69-45cb-b737-8658c85d25d9" May 15 00:29:07.303954 containerd[1440]: time="2025-05-15T00:29:07.303913829Z" level=error msg="Failed to destroy network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.304260 containerd[1440]: time="2025-05-15T00:29:07.304234949Z" level=error msg="encountered an error cleaning up failed sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.304328 containerd[1440]: time="2025-05-15T00:29:07.304290109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-j887d,Uid:1c672a4f-dc2d-49d4-92b2-aa1858123efa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.304667 kubelet[2466]: E0515 00:29:07.304632 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:07.304720 kubelet[2466]: E0515 00:29:07.304687 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" May 15 00:29:07.304720 kubelet[2466]: E0515 00:29:07.304706 2466 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" May 15 00:29:07.304922 kubelet[2466]: E0515 00:29:07.304750 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79d7d69dbd-j887d_calico-apiserver(1c672a4f-dc2d-49d4-92b2-aa1858123efa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79d7d69dbd-j887d_calico-apiserver(1c672a4f-dc2d-49d4-92b2-aa1858123efa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" podUID="1c672a4f-dc2d-49d4-92b2-aa1858123efa" May 15 00:29:08.017304 kubelet[2466]: I0515 00:29:08.017261 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:08.018442 containerd[1440]: time="2025-05-15T00:29:08.018011764Z" level=info msg="StopPodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\"" May 15 00:29:08.018442 containerd[1440]: time="2025-05-15T00:29:08.018201044Z" level=info msg="Ensure that sandbox 2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2 in task-service has been cleanup successfully" May 15 00:29:08.020311 kubelet[2466]: I0515 00:29:08.020241 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:08.021016 containerd[1440]: time="2025-05-15T00:29:08.020989368Z" level=info msg="StopPodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\"" May 15 00:29:08.021307 containerd[1440]: time="2025-05-15T00:29:08.021284249Z" level=info msg="Ensure that sandbox b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c in task-service has been cleanup successfully" May 15 00:29:08.024023 kubelet[2466]: I0515 00:29:08.023995 2466 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:08.025972 containerd[1440]: time="2025-05-15T00:29:08.025166975Z" level=info msg="StopPodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\"" May 15 00:29:08.026398 containerd[1440]: time="2025-05-15T00:29:08.026113537Z" level=info msg="Ensure that sandbox a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635 in task-service has been cleanup successfully" May 15 00:29:08.026449 kubelet[2466]: I0515 00:29:08.026416 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:08.028027 kubelet[2466]: I0515 00:29:08.027980 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:08.028876 containerd[1440]: time="2025-05-15T00:29:08.028841141Z" level=info msg="StopPodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\"" May 15 00:29:08.029121 containerd[1440]: time="2025-05-15T00:29:08.029081581Z" level=info msg="StopPodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\"" May 15 00:29:08.029272 containerd[1440]: time="2025-05-15T00:29:08.029238982Z" level=info msg="Ensure that sandbox 7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9 in task-service has been cleanup successfully" May 15 00:29:08.030024 containerd[1440]: time="2025-05-15T00:29:08.029982543Z" level=info msg="Ensure that sandbox b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0 in task-service has been cleanup successfully" May 15 00:29:08.031004 kubelet[2466]: I0515 00:29:08.030975 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:08.032086 containerd[1440]: time="2025-05-15T00:29:08.032037906Z" level=info msg="StopPodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\"" May 15 00:29:08.032231 containerd[1440]: time="2025-05-15T00:29:08.032203546Z" level=info msg="Ensure that sandbox 7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d in task-service has been cleanup successfully" May 15 00:29:08.082320 containerd[1440]: time="2025-05-15T00:29:08.081444225Z" level=error msg="StopPodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" failed" error="failed to destroy network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:08.082568 kubelet[2466]: E0515 00:29:08.082252 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:08.082568 kubelet[2466]: E0515 00:29:08.082308 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c"} May 15 00:29:08.082568 kubelet[2466]: E0515 00:29:08.082388 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c672a4f-dc2d-49d4-92b2-aa1858123efa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:29:08.082568 kubelet[2466]: E0515 00:29:08.082410 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c672a4f-dc2d-49d4-92b2-aa1858123efa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" podUID="1c672a4f-dc2d-49d4-92b2-aa1858123efa" May 15 00:29:08.084961 kubelet[2466]: E0515 00:29:08.082803 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:08.084961 kubelet[2466]: E0515 00:29:08.082831 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635"} May 15 00:29:08.084961 kubelet[2466]: E0515 00:29:08.082882 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b70c740-ad72-450d-bead-e209924ab516\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:29:08.084961 kubelet[2466]: E0515 00:29:08.082902 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b70c740-ad72-450d-bead-e209924ab516\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gtg4n" podUID="4b70c740-ad72-450d-bead-e209924ab516" May 15 00:29:08.086301 containerd[1440]: time="2025-05-15T00:29:08.082603587Z" level=error msg="StopPodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" failed" error="failed to destroy network for sandbox 
\"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:08.086301 containerd[1440]: time="2025-05-15T00:29:08.083916749Z" level=error msg="StopPodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" failed" error="failed to destroy network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:08.086383 kubelet[2466]: E0515 00:29:08.084124 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:08.086383 kubelet[2466]: E0515 00:29:08.084212 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d"} May 15 00:29:08.086383 kubelet[2466]: E0515 00:29:08.084253 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f966a81f-0997-40a6-9fea-29e50dec8072\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:29:08.086383 kubelet[2466]: E0515 00:29:08.084273 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f966a81f-0997-40a6-9fea-29e50dec8072\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-64gbz" podUID="f966a81f-0997-40a6-9fea-29e50dec8072" May 15 00:29:08.090369 containerd[1440]: time="2025-05-15T00:29:08.089873478Z" level=error msg="StopPodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" failed" error="failed to destroy network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:08.090458 kubelet[2466]: E0515 00:29:08.090200 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:08.090458 kubelet[2466]: E0515 00:29:08.090259 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0"} May 15 00:29:08.090458 kubelet[2466]: E0515 00:29:08.090293 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cdc568ed-985b-493b-94ee-815dbe88ed76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:29:08.090458 kubelet[2466]: E0515 00:29:08.090402 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cdc568ed-985b-493b-94ee-815dbe88ed76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" podUID="cdc568ed-985b-493b-94ee-815dbe88ed76" May 15 00:29:08.094413 containerd[1440]: time="2025-05-15T00:29:08.094295805Z" level=error msg="StopPodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" failed" error="failed to destroy network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:08.094545 kubelet[2466]: E0515 00:29:08.094501 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:08.094577 kubelet[2466]: E0515 00:29:08.094557 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2"} May 15 00:29:08.094609 kubelet[2466]: E0515 00:29:08.094587 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23f642bf-ba69-45cb-b737-8658c85d25d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:29:08.094653 kubelet[2466]: E0515 00:29:08.094607 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"23f642bf-ba69-45cb-b737-8658c85d25d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" podUID="23f642bf-ba69-45cb-b737-8658c85d25d9" May 15 00:29:08.094687 containerd[1440]: time="2025-05-15T00:29:08.094299885Z" level=error msg="StopPodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" failed" error="failed to destroy network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:29:08.094902 kubelet[2466]: E0515 00:29:08.094873 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:08.094942 kubelet[2466]: E0515 00:29:08.094930 2466 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9"} May 15 00:29:08.094973 kubelet[2466]: E0515 00:29:08.094954 2466 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"936f2675-5caf-4d04-b8fa-7b40aacfa39d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:29:08.095008 kubelet[2466]: E0515 00:29:08.094974 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"936f2675-5caf-4d04-b8fa-7b40aacfa39d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-v6wlt" podUID="936f2675-5caf-4d04-b8fa-7b40aacfa39d" May 15 00:29:10.994119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131087234.mount: Deactivated successfully. 
May 15 00:29:11.149678 containerd[1440]: time="2025-05-15T00:29:11.149618951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:11.150162 containerd[1440]: time="2025-05-15T00:29:11.150119632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 15 00:29:11.150876 containerd[1440]: time="2025-05-15T00:29:11.150841193Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:11.153537 containerd[1440]: time="2025-05-15T00:29:11.153496516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:11.154161 containerd[1440]: time="2025-05-15T00:29:11.154130917Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 4.130812926s" May 15 00:29:11.154197 containerd[1440]: time="2025-05-15T00:29:11.154164637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 15 00:29:11.166111 containerd[1440]: time="2025-05-15T00:29:11.165960692Z" level=info msg="CreateContainer within sandbox \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 00:29:11.193285 containerd[1440]: time="2025-05-15T00:29:11.193154568Z" level=info msg="CreateContainer within sandbox \"0b05f905e7952b4d64789137022d620b68803f6186fd0bc048b4e50883d14aa3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"67666e9ae645798b5f00c85625f280a6a474cc133b58be71d04b040656f20491\"" May 15 00:29:11.194133 containerd[1440]: time="2025-05-15T00:29:11.193890009Z" level=info msg="StartContainer for \"67666e9ae645798b5f00c85625f280a6a474cc133b58be71d04b040656f20491\"" May 15 00:29:11.245936 systemd[1]: Started cri-containerd-67666e9ae645798b5f00c85625f280a6a474cc133b58be71d04b040656f20491.scope - libcontainer container 67666e9ae645798b5f00c85625f280a6a474cc133b58be71d04b040656f20491. May 15 00:29:11.277479 containerd[1440]: time="2025-05-15T00:29:11.277437839Z" level=info msg="StartContainer for \"67666e9ae645798b5f00c85625f280a6a474cc133b58be71d04b040656f20491\" returns successfully" May 15 00:29:11.531487 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 00:29:11.531659 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 15 00:29:12.041288 kubelet[2466]: E0515 00:29:12.041036 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:12.056811 kubelet[2466]: I0515 00:29:12.054711 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p48lx" podStartSLOduration=1.694506946 podStartE2EDuration="17.054695578s" podCreationTimestamp="2025-05-15 00:28:55 +0000 UTC" firstStartedPulling="2025-05-15 00:28:55.794837486 +0000 UTC m=+11.963645884" lastFinishedPulling="2025-05-15 00:29:11.155026158 +0000 UTC m=+27.323834516" observedRunningTime="2025-05-15 00:29:12.054183617 +0000 UTC m=+28.222992015" watchObservedRunningTime="2025-05-15 00:29:12.054695578 +0000 UTC m=+28.223503976" May 15 00:29:13.042674 kubelet[2466]: I0515 00:29:13.042455 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:13.043037 kubelet[2466]: E0515 00:29:13.042872 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:14.044826 kubelet[2466]: E0515 00:29:14.044280 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:14.715440 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:50456.service - OpenSSH per-connection server daemon (10.0.0.1:50456). May 15 00:29:14.763370 sshd[3896]: Accepted publickey for core from 10.0.0.1 port 50456 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:14.765002 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:14.768719 systemd-logind[1420]: New session 8 of user core. May 15 00:29:14.775938 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 00:29:14.933869 sshd[3896]: pam_unix(sshd:session): session closed for user core May 15 00:29:14.937369 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:50456.service: Deactivated successfully. May 15 00:29:14.939054 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:29:14.939746 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. May 15 00:29:14.940672 systemd-logind[1420]: Removed session 8. 
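
The pod_startup_latency_tracker entry above is internally consistent: podStartSLOduration (1.694506946s) equals podStartE2EDuration (17.054695578s) minus the image-pull window bounded by firstStartedPulling and lastFinishedPulling (m=+11.963645884 to m=+27.323834516, i.e. 15.360188632s). A quick check of that arithmetic, assuming the SLO figure is indeed derived by this subtraction:

```go
package main

import "fmt"

func main() {
	// Monotonic "m=+" offsets copied from the kubelet entry above, in seconds.
	firstStartedPulling := 11.963645884
	lastFinishedPulling := 27.323834516
	podStartE2E := 17.054695578 // podStartE2EDuration

	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pullWindow

	fmt.Printf("image pull window:   %.9fs\n", pullWindow) // 15.360188632s
	fmt.Printf("podStartSLOduration: %.9fs\n", slo)        // 1.694506946s, matching the logged value
}
```
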
May 15 00:29:18.073288 kubelet[2466]: I0515 00:29:18.073250 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:18.074367 kubelet[2466]: E0515 00:29:18.073591 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:18.916504 containerd[1440]: time="2025-05-15T00:29:18.911622414Z" level=info msg="StopPodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\"" May 15 00:29:18.916504 containerd[1440]: time="2025-05-15T00:29:18.911655814Z" level=info msg="StopPodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\"" May 15 00:29:19.057984 kubelet[2466]: E0515 00:29:19.057930 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.012 [INFO][4059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.012 [INFO][4059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" iface="eth0" netns="/var/run/netns/cni-d2c11092-bd39-7db0-1b0e-021aa51abcd4" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.015 [INFO][4059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" iface="eth0" netns="/var/run/netns/cni-d2c11092-bd39-7db0-1b0e-021aa51abcd4" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.020 [INFO][4059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" iface="eth0" netns="/var/run/netns/cni-d2c11092-bd39-7db0-1b0e-021aa51abcd4" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.020 [INFO][4059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.020 [INFO][4059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.130 [INFO][4075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.130 [INFO][4075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.130 [INFO][4075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.145 [WARNING][4075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.145 [INFO][4075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.147 [INFO][4075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:19.150571 containerd[1440]: 2025-05-15 00:29:19.149 [INFO][4059] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:19.150979 containerd[1440]: time="2025-05-15T00:29:19.150727126Z" level=info msg="TearDown network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" successfully" May 15 00:29:19.150979 containerd[1440]: time="2025-05-15T00:29:19.150802606Z" level=info msg="StopPodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" returns successfully" May 15 00:29:19.151401 kubelet[2466]: E0515 00:29:19.151269 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:19.152184 containerd[1440]: time="2025-05-15T00:29:19.152138967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v6wlt,Uid:936f2675-5caf-4d04-b8fa-7b40aacfa39d,Namespace:kube-system,Attempt:1,}" May 15 00:29:19.155948 systemd[1]: run-netns-cni\x2dd2c11092\x2dbd39\x2d7db0\x2d1b0e\x2d021aa51abcd4.mount: Deactivated successfully. May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.011 [INFO][4058] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.011 [INFO][4058] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" iface="eth0" netns="/var/run/netns/cni-91dfe5ce-ac04-af69-5d42-374e9d74b7fd" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.015 [INFO][4058] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" iface="eth0" netns="/var/run/netns/cni-91dfe5ce-ac04-af69-5d42-374e9d74b7fd" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.020 [INFO][4058] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" iface="eth0" netns="/var/run/netns/cni-91dfe5ce-ac04-af69-5d42-374e9d74b7fd" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.020 [INFO][4058] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.020 [INFO][4058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.130 [INFO][4074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.130 [INFO][4074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.147 [INFO][4074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.162 [WARNING][4074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.162 [INFO][4074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.163 [INFO][4074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:19.167522 containerd[1440]: 2025-05-15 00:29:19.165 [INFO][4058] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:19.168582 containerd[1440]: time="2025-05-15T00:29:19.167598219Z" level=info msg="TearDown network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" successfully" May 15 00:29:19.168582 containerd[1440]: time="2025-05-15T00:29:19.167624699Z" level=info msg="StopPodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" returns successfully" May 15 00:29:19.168582 containerd[1440]: time="2025-05-15T00:29:19.168379980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64gbz,Uid:f966a81f-0997-40a6-9fea-29e50dec8072,Namespace:calico-system,Attempt:1,}" May 15 00:29:19.170192 systemd[1]: run-netns-cni\x2d91dfe5ce\x2dac04\x2daf69\x2d5d42\x2d374e9d74b7fd.mount: Deactivated successfully. 
May 15 00:29:19.236250 kernel: bpftool[4132]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 15 00:29:19.365361 systemd-networkd[1376]: calif9079e85f98: Link UP May 15 00:29:19.367647 systemd-networkd[1376]: calif9079e85f98: Gained carrier May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.238 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--64gbz-eth0 csi-node-driver- calico-system f966a81f-0997-40a6-9fea-29e50dec8072 810 0 2025-05-15 00:28:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-64gbz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif9079e85f98 [] []}} ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.238 [INFO][4098] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.283 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" HandleID="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.298 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" HandleID="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-64gbz", "timestamp":"2025-05-15 00:29:19.28335251 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.298 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.298 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.300 [INFO][4135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.306 [INFO][4135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.328 [INFO][4135] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.332 [INFO][4135] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.334 [INFO][4135] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.336 [INFO][4135] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.336 [INFO][4135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.337 [INFO][4135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6 May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.342 [INFO][4135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.348 [INFO][4135] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.348 [INFO][4135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" host="localhost" May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.349 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:19.384256 containerd[1440]: 2025-05-15 00:29:19.349 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" HandleID="k8s-pod-network.b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.384905 containerd[1440]: 2025-05-15 00:29:19.351 [INFO][4098] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--64gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f966a81f-0997-40a6-9fea-29e50dec8072", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-64gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif9079e85f98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:19.384905 containerd[1440]: 2025-05-15 00:29:19.352 [INFO][4098] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.384905 containerd[1440]: 2025-05-15 00:29:19.352 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9079e85f98 ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.384905 containerd[1440]: 2025-05-15 00:29:19.368 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.384905 containerd[1440]: 2025-05-15 00:29:19.370 [INFO][4098] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--64gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f966a81f-0997-40a6-9fea-29e50dec8072", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6", Pod:"csi-node-driver-64gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif9079e85f98", MAC:"62:14:44:36:c8:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:19.384905 containerd[1440]: 2025-05-15 00:29:19.382 [INFO][4098] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6" Namespace="calico-system" Pod="csi-node-driver-64gbz" WorkloadEndpoint="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:19.401820 systemd-networkd[1376]: vxlan.calico: Link UP May 15 00:29:19.401829 systemd-networkd[1376]: vxlan.calico: Gained carrier May 15 00:29:19.410452 containerd[1440]: time="2025-05-15T00:29:19.410263690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:19.410452 containerd[1440]: time="2025-05-15T00:29:19.410321570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:19.410452 containerd[1440]: time="2025-05-15T00:29:19.410353810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:19.410452 containerd[1440]: time="2025-05-15T00:29:19.410443090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:19.434968 systemd[1]: Started cri-containerd-b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6.scope - libcontainer container b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6. 
May 15 00:29:19.454428 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:29:19.477414 containerd[1440]: time="2025-05-15T00:29:19.477343983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-64gbz,Uid:f966a81f-0997-40a6-9fea-29e50dec8072,Namespace:calico-system,Attempt:1,} returns sandbox id \"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6\"" May 15 00:29:19.479004 systemd-networkd[1376]: cali9e88a07bd86: Link UP May 15 00:29:19.479761 systemd-networkd[1376]: cali9e88a07bd86: Gained carrier May 15 00:29:19.479879 containerd[1440]: time="2025-05-15T00:29:19.479846304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.263 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0 coredns-6f6b679f8f- kube-system 936f2675-5caf-4d04-b8fa-7b40aacfa39d 811 0 2025-05-15 00:28:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-v6wlt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e88a07bd86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.263 [INFO][4115] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.303 [INFO][4142] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" HandleID="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.325 [INFO][4142] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" HandleID="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-v6wlt", "timestamp":"2025-05-15 00:29:19.303178966 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.325 [INFO][4142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.349 [INFO][4142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.349 [INFO][4142] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.406 [INFO][4142] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.414 [INFO][4142] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.440 [INFO][4142] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.445 [INFO][4142] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.450 [INFO][4142] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.450 [INFO][4142] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.454 [INFO][4142] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9 May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.459 [INFO][4142] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.470 [INFO][4142] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.470 [INFO][4142] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" host="localhost" May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.470 [INFO][4142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:19.494006 containerd[1440]: 2025-05-15 00:29:19.470 [INFO][4142] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" HandleID="k8s-pod-network.c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.494517 containerd[1440]: 2025-05-15 00:29:19.476 [INFO][4115] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"936f2675-5caf-4d04-b8fa-7b40aacfa39d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-v6wlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e88a07bd86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:19.494517 containerd[1440]: 2025-05-15 00:29:19.476 [INFO][4115] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.494517 containerd[1440]: 2025-05-15 00:29:19.476 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e88a07bd86 ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.494517 containerd[1440]: 2025-05-15 00:29:19.481 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.494517 containerd[1440]: 2025-05-15 00:29:19.481 
[INFO][4115] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"936f2675-5caf-4d04-b8fa-7b40aacfa39d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9", Pod:"coredns-6f6b679f8f-v6wlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e88a07bd86", MAC:"de:be:5c:27:60:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:19.494517 containerd[1440]: 2025-05-15 00:29:19.491 [INFO][4115] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9" Namespace="kube-system" Pod="coredns-6f6b679f8f-v6wlt" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:19.512703 containerd[1440]: time="2025-05-15T00:29:19.512600570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:19.512703 containerd[1440]: time="2025-05-15T00:29:19.512680810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:19.512703 containerd[1440]: time="2025-05-15T00:29:19.512694050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:19.513010 containerd[1440]: time="2025-05-15T00:29:19.512868770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:19.532982 systemd[1]: Started cri-containerd-c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9.scope - libcontainer container c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9. 
May 15 00:29:19.542724 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:29:19.573824 containerd[1440]: time="2025-05-15T00:29:19.573536778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v6wlt,Uid:936f2675-5caf-4d04-b8fa-7b40aacfa39d,Namespace:kube-system,Attempt:1,} returns sandbox id \"c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9\"" May 15 00:29:19.574854 kubelet[2466]: E0515 00:29:19.574809 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:19.579339 containerd[1440]: time="2025-05-15T00:29:19.579290503Z" level=info msg="CreateContainer within sandbox \"c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:29:19.595697 containerd[1440]: time="2025-05-15T00:29:19.595514555Z" level=info msg="CreateContainer within sandbox \"c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a55c6b4b6a53a3c7b52f576bb7cda19cc28f2e122d51691c8e5d520e68f0aad\"" May 15 00:29:19.596468 containerd[1440]: time="2025-05-15T00:29:19.596441556Z" level=info msg="StartContainer for \"3a55c6b4b6a53a3c7b52f576bb7cda19cc28f2e122d51691c8e5d520e68f0aad\"" May 15 00:29:19.625978 systemd[1]: Started cri-containerd-3a55c6b4b6a53a3c7b52f576bb7cda19cc28f2e122d51691c8e5d520e68f0aad.scope - libcontainer container 3a55c6b4b6a53a3c7b52f576bb7cda19cc28f2e122d51691c8e5d520e68f0aad. May 15 00:29:19.656575 containerd[1440]: time="2025-05-15T00:29:19.656529243Z" level=info msg="StartContainer for \"3a55c6b4b6a53a3c7b52f576bb7cda19cc28f2e122d51691c8e5d520e68f0aad\" returns successfully" May 15 00:29:19.951913 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:50464.service - OpenSSH per-connection server daemon (10.0.0.1:50464). May 15 00:29:20.000673 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 50464 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:20.002627 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:20.006932 systemd-logind[1420]: New session 9 of user core. May 15 00:29:20.020976 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 00:29:20.060650 kubelet[2466]: E0515 00:29:20.060611 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:20.116632 kubelet[2466]: I0515 00:29:20.116110 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v6wlt" podStartSLOduration=31.116093039 podStartE2EDuration="31.116093039s" podCreationTimestamp="2025-05-15 00:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:20.089549699 +0000 UTC m=+36.258358097" watchObservedRunningTime="2025-05-15 00:29:20.116093039 +0000 UTC m=+36.284901437" May 15 00:29:20.250746 sshd[4371]: pam_unix(sshd:session): session closed for user core May 15 00:29:20.256849 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:50464.service: Deactivated successfully. May 15 00:29:20.258701 systemd[1]: session-9.scope: Deactivated successfully. 
May 15 00:29:20.260305 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. May 15 00:29:20.261378 systemd-logind[1420]: Removed session 9. May 15 00:29:20.454025 systemd-networkd[1376]: calif9079e85f98: Gained IPv6LL May 15 00:29:20.868752 containerd[1440]: time="2025-05-15T00:29:20.868701593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:20.869747 containerd[1440]: time="2025-05-15T00:29:20.869656353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 15 00:29:20.870497 containerd[1440]: time="2025-05-15T00:29:20.870451074Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:20.873151 containerd[1440]: time="2025-05-15T00:29:20.873088836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:20.874412 containerd[1440]: time="2025-05-15T00:29:20.873800196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.393917092s" May 15 00:29:20.874412 containerd[1440]: time="2025-05-15T00:29:20.873836636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 15 00:29:20.883594 containerd[1440]: time="2025-05-15T00:29:20.881201842Z" level=info msg="CreateContainer within sandbox \"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 00:29:20.911037 containerd[1440]: time="2025-05-15T00:29:20.910998264Z" level=info msg="StopPodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\"" May 15 00:29:20.911381 containerd[1440]: time="2025-05-15T00:29:20.911327504Z" level=info msg="StopPodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\"" May 15 00:29:20.912860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1319915933.mount: Deactivated successfully. May 15 00:29:20.917936 containerd[1440]: time="2025-05-15T00:29:20.917880349Z" level=info msg="CreateContainer within sandbox \"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7b499109c359362ea1c0188621ae8bbdff89eda69ad79c402d0d58b59e13e7ca\"" May 15 00:29:20.918946 containerd[1440]: time="2025-05-15T00:29:20.918907550Z" level=info msg="StartContainer for \"7b499109c359362ea1c0188621ae8bbdff89eda69ad79c402d0d58b59e13e7ca\"" May 15 00:29:20.970931 systemd[1]: Started cri-containerd-7b499109c359362ea1c0188621ae8bbdff89eda69ad79c402d0d58b59e13e7ca.scope - libcontainer container 7b499109c359362ea1c0188621ae8bbdff89eda69ad79c402d0d58b59e13e7ca. 
May 15 00:29:21.018229 containerd[1440]: time="2025-05-15T00:29:21.017680542Z" level=info msg="StartContainer for \"7b499109c359362ea1c0188621ae8bbdff89eda69ad79c402d0d58b59e13e7ca\" returns successfully" May 15 00:29:21.024178 containerd[1440]: time="2025-05-15T00:29:21.024137106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.004 [INFO][4427] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.004 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" iface="eth0" netns="/var/run/netns/cni-b405ed66-a9d9-0bb1-404f-2651a7addbbf" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.005 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" iface="eth0" netns="/var/run/netns/cni-b405ed66-a9d9-0bb1-404f-2651a7addbbf" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.005 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" iface="eth0" netns="/var/run/netns/cni-b405ed66-a9d9-0bb1-404f-2651a7addbbf" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.005 [INFO][4427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.005 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.034 [INFO][4482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.035 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.035 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.043 [WARNING][4482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.043 [INFO][4482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.045 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:21.053949 containerd[1440]: 2025-05-15 00:29:21.050 [INFO][4427] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:21.054380 containerd[1440]: time="2025-05-15T00:29:21.054159407Z" level=info msg="TearDown network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" successfully" May 15 00:29:21.054380 containerd[1440]: time="2025-05-15T00:29:21.054200767Z" level=info msg="StopPodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" returns successfully" May 15 00:29:21.055268 containerd[1440]: time="2025-05-15T00:29:21.055232168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84874c75bf-xm54f,Uid:cdc568ed-985b-493b-94ee-815dbe88ed76,Namespace:calico-system,Attempt:1,}" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.023 [INFO][4437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.023 [INFO][4437] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" iface="eth0" netns="/var/run/netns/cni-dffeb626-376b-7bcf-2591-a730b23b22aa" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.024 [INFO][4437] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" iface="eth0" netns="/var/run/netns/cni-dffeb626-376b-7bcf-2591-a730b23b22aa" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.025 [INFO][4437] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" iface="eth0" netns="/var/run/netns/cni-dffeb626-376b-7bcf-2591-a730b23b22aa" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.025 [INFO][4437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.025 [INFO][4437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.045 [INFO][4489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.045 [INFO][4489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.045 [INFO][4489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.057 [WARNING][4489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.057 [INFO][4489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.058 [INFO][4489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:21.063488 containerd[1440]: 2025-05-15 00:29:21.060 [INFO][4437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:21.064209 containerd[1440]: time="2025-05-15T00:29:21.064181494Z" level=info msg="TearDown network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" successfully" May 15 00:29:21.064303 containerd[1440]: time="2025-05-15T00:29:21.064208894Z" level=info msg="StopPodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" returns successfully" May 15 00:29:21.064808 containerd[1440]: time="2025-05-15T00:29:21.064738814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-qwhnz,Uid:23f642bf-ba69-45cb-b737-8658c85d25d9,Namespace:calico-apiserver,Attempt:1,}" May 15 00:29:21.066931 kubelet[2466]: E0515 00:29:21.066855 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:21.157372 systemd[1]: run-netns-cni\x2db405ed66\x2da9d9\x2d0bb1\x2d404f\x2d2651a7addbbf.mount: Deactivated successfully. May 15 00:29:21.157451 systemd[1]: run-netns-cni\x2ddffeb626\x2d376b\x2d7bcf\x2d2591\x2da730b23b22aa.mount: Deactivated successfully. 
May 15 00:29:21.202125 systemd-networkd[1376]: cali2fc2c114722: Link UP May 15 00:29:21.202336 systemd-networkd[1376]: cali2fc2c114722: Gained carrier May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.107 [INFO][4498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0 calico-kube-controllers-84874c75bf- calico-system cdc568ed-985b-493b-94ee-815dbe88ed76 852 0 2025-05-15 00:28:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84874c75bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84874c75bf-xm54f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2fc2c114722 [] []}} ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.107 [INFO][4498] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.142 [INFO][4526] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" HandleID="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.159 [INFO][4526] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" HandleID="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e7e00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84874c75bf-xm54f", "timestamp":"2025-05-15 00:29:21.142356428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.159 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.159 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.159 [INFO][4526] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.166 [INFO][4526] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.170 [INFO][4526] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.175 [INFO][4526] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.182 [INFO][4526] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.184 [INFO][4526] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.184 [INFO][4526] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.186 [INFO][4526] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9 May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.189 [INFO][4526] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.195 [INFO][4526] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.195 [INFO][4526] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" host="localhost" May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.195 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:21.214676 containerd[1440]: 2025-05-15 00:29:21.195 [INFO][4526] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" HandleID="k8s-pod-network.ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.216246 containerd[1440]: 2025-05-15 00:29:21.199 [INFO][4498] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0", GenerateName:"calico-kube-controllers-84874c75bf-", Namespace:"calico-system", SelfLink:"", UID:"cdc568ed-985b-493b-94ee-815dbe88ed76", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84874c75bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84874c75bf-xm54f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2fc2c114722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:21.216246 containerd[1440]: 2025-05-15 00:29:21.199 [INFO][4498] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.216246 containerd[1440]: 2025-05-15 00:29:21.199 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2fc2c114722 ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.216246 containerd[1440]: 2025-05-15 00:29:21.202 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.216246 containerd[1440]: 2025-05-15 00:29:21.202 [INFO][4498] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0", GenerateName:"calico-kube-controllers-84874c75bf-", Namespace:"calico-system", SelfLink:"", UID:"cdc568ed-985b-493b-94ee-815dbe88ed76", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84874c75bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9", Pod:"calico-kube-controllers-84874c75bf-xm54f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2fc2c114722", MAC:"32:8e:8a:99:7d:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:21.216246 containerd[1440]: 2025-05-15 00:29:21.212 [INFO][4498] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9" Namespace="calico-system" Pod="calico-kube-controllers-84874c75bf-xm54f" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:21.233448 containerd[1440]: time="2025-05-15T00:29:21.233280610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:21.233448 containerd[1440]: time="2025-05-15T00:29:21.233342130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:21.233448 containerd[1440]: time="2025-05-15T00:29:21.233355370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:21.234135 containerd[1440]: time="2025-05-15T00:29:21.233442971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:21.250974 systemd[1]: Started cri-containerd-ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9.scope - libcontainer container ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9. 
May 15 00:29:21.261132 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:29:21.281287 containerd[1440]: time="2025-05-15T00:29:21.280949403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84874c75bf-xm54f,Uid:cdc568ed-985b-493b-94ee-815dbe88ed76,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9\"" May 15 00:29:21.286193 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL May 15 00:29:21.305082 systemd-networkd[1376]: caliee1a68071ff: Link UP May 15 00:29:21.305339 systemd-networkd[1376]: caliee1a68071ff: Gained carrier May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.110 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0 calico-apiserver-79d7d69dbd- calico-apiserver 23f642bf-ba69-45cb-b737-8658c85d25d9 853 0 2025-05-15 00:28:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d7d69dbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79d7d69dbd-qwhnz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee1a68071ff [] []}} ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.111 [INFO][4510] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.158 [INFO][4533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" HandleID="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.170 [INFO][4533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" HandleID="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000428b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79d7d69dbd-qwhnz", "timestamp":"2025-05-15 00:29:21.158718639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.170 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.195 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.196 [INFO][4533] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.267 [INFO][4533] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.273 [INFO][4533] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.279 [INFO][4533] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.281 [INFO][4533] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.285 [INFO][4533] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.285 [INFO][4533] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.287 [INFO][4533] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226 May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.292 [INFO][4533] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.297 [INFO][4533] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.297 [INFO][4533] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" host="localhost" May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.297 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:21.337717 containerd[1440]: 2025-05-15 00:29:21.297 [INFO][4533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" HandleID="k8s-pod-network.22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.338340 containerd[1440]: 2025-05-15 00:29:21.299 [INFO][4510] cni-plugin/k8s.go 386: Populated endpoint ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"23f642bf-ba69-45cb-b737-8658c85d25d9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79d7d69dbd-qwhnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1a68071ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:21.338340 containerd[1440]: 2025-05-15 00:29:21.299 [INFO][4510] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.338340 containerd[1440]: 2025-05-15 00:29:21.299 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee1a68071ff ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.338340 containerd[1440]: 2025-05-15 00:29:21.305 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.338340 containerd[1440]: 2025-05-15 00:29:21.306 [INFO][4510] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"23f642bf-ba69-45cb-b737-8658c85d25d9", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226", Pod:"calico-apiserver-79d7d69dbd-qwhnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1a68071ff", MAC:"8e:b4:43:eb:ff:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:21.338340 containerd[1440]: 2025-05-15 00:29:21.335 [INFO][4510] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-qwhnz" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:21.358396 containerd[1440]: time="2025-05-15T00:29:21.358277977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:21.358396 containerd[1440]: time="2025-05-15T00:29:21.358349537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:21.358396 containerd[1440]: time="2025-05-15T00:29:21.358360497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:21.358664 containerd[1440]: time="2025-05-15T00:29:21.358436857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:21.379019 systemd[1]: Started cri-containerd-22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226.scope - libcontainer container 22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226. 
May 15 00:29:21.389140 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:29:21.409685 containerd[1440]: time="2025-05-15T00:29:21.409560172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-qwhnz,Uid:23f642bf-ba69-45cb-b737-8658c85d25d9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226\"" May 15 00:29:21.477921 systemd-networkd[1376]: cali9e88a07bd86: Gained IPv6LL May 15 00:29:21.910297 containerd[1440]: time="2025-05-15T00:29:21.910203238Z" level=info msg="StopPodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\"" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.955 [INFO][4667] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.956 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" iface="eth0" netns="/var/run/netns/cni-9fe478d5-1434-3e1d-bbf9-4a5d5e55510a" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.956 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" iface="eth0" netns="/var/run/netns/cni-9fe478d5-1434-3e1d-bbf9-4a5d5e55510a" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.956 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" iface="eth0" netns="/var/run/netns/cni-9fe478d5-1434-3e1d-bbf9-4a5d5e55510a" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.956 [INFO][4667] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.956 [INFO][4667] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.976 [INFO][4675] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.976 [INFO][4675] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.976 [INFO][4675] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.985 [WARNING][4675] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.985 [INFO][4675] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.986 [INFO][4675] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:21.989827 containerd[1440]: 2025-05-15 00:29:21.988 [INFO][4667] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:21.990217 containerd[1440]: time="2025-05-15T00:29:21.990053773Z" level=info msg="TearDown network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" successfully" May 15 00:29:21.990217 containerd[1440]: time="2025-05-15T00:29:21.990080173Z" level=info msg="StopPodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" returns successfully" May 15 00:29:21.990452 kubelet[2466]: E0515 00:29:21.990425 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:21.991400 containerd[1440]: time="2025-05-15T00:29:21.991356134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gtg4n,Uid:4b70c740-ad72-450d-bead-e209924ab516,Namespace:kube-system,Attempt:1,}" May 15 00:29:22.077398 kubelet[2466]: E0515 00:29:22.076163 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:22.139564 systemd-networkd[1376]: cali5a95f8aeb86: Link UP May 15 00:29:22.139796 systemd-networkd[1376]: cali5a95f8aeb86: Gained carrier May 15 00:29:22.156217 systemd[1]: run-netns-cni\x2d9fe478d5\x2d1434\x2d3e1d\x2dbbf9\x2d4a5d5e55510a.mount: Deactivated successfully. 
May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.039 [INFO][4684] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0 coredns-6f6b679f8f- kube-system 4b70c740-ad72-450d-bead-e209924ab516 873 0 2025-05-15 00:28:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-gtg4n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a95f8aeb86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.039 [INFO][4684] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.085 [INFO][4698] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" HandleID="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.100 [INFO][4698] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" HandleID="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400052c0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-gtg4n", "timestamp":"2025-05-15 00:29:22.085265395 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.100 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.100 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.100 [INFO][4698] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.102 [INFO][4698] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.107 [INFO][4698] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.112 [INFO][4698] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.114 [INFO][4698] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.118 [INFO][4698] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.118 [INFO][4698] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.120 [INFO][4698] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71 May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.124 [INFO][4698] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.133 [INFO][4698] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.133 [INFO][4698] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" host="localhost" May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.133 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:22.160286 containerd[1440]: 2025-05-15 00:29:22.133 [INFO][4698] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" HandleID="k8s-pod-network.5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.160866 containerd[1440]: 2025-05-15 00:29:22.135 [INFO][4684] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4b70c740-ad72-450d-bead-e209924ab516", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-gtg4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a95f8aeb86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:22.160866 containerd[1440]: 2025-05-15 00:29:22.135 [INFO][4684] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.160866 containerd[1440]: 2025-05-15 00:29:22.135 [INFO][4684] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a95f8aeb86 ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.160866 containerd[1440]: 2025-05-15 00:29:22.140 [INFO][4684] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.160866 containerd[1440]: 2025-05-15 00:29:22.142 
[INFO][4684] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4b70c740-ad72-450d-bead-e209924ab516", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71", Pod:"coredns-6f6b679f8f-gtg4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a95f8aeb86", MAC:"e2:a9:e0:77:2c:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:22.160866 containerd[1440]: 2025-05-15 00:29:22.156 [INFO][4684] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71" Namespace="kube-system" Pod="coredns-6f6b679f8f-gtg4n" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:22.185572 containerd[1440]: time="2025-05-15T00:29:22.185492740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:22.185572 containerd[1440]: time="2025-05-15T00:29:22.185543380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:22.185880 containerd[1440]: time="2025-05-15T00:29:22.185555100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:22.185880 containerd[1440]: time="2025-05-15T00:29:22.185643060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:22.208019 systemd[1]: Started cri-containerd-5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71.scope - libcontainer container 5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71. 
May 15 00:29:22.221793 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:29:22.246097 containerd[1440]: time="2025-05-15T00:29:22.246047939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gtg4n,Uid:4b70c740-ad72-450d-bead-e209924ab516,Namespace:kube-system,Attempt:1,} returns sandbox id \"5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71\"" May 15 00:29:22.247162 kubelet[2466]: E0515 00:29:22.247139 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:22.250563 containerd[1440]: time="2025-05-15T00:29:22.250367462Z" level=info msg="CreateContainer within sandbox \"5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:29:22.268351 containerd[1440]: time="2025-05-15T00:29:22.268288393Z" level=info msg="CreateContainer within sandbox \"5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9faadab8f38e15eb548c5959acd29cb56ea4b36300e7d3ad33b5e1e585a6172\"" May 15 00:29:22.269406 containerd[1440]: time="2025-05-15T00:29:22.269378234Z" level=info msg="StartContainer for \"b9faadab8f38e15eb548c5959acd29cb56ea4b36300e7d3ad33b5e1e585a6172\"" May 15 00:29:22.301984 systemd[1]: Started cri-containerd-b9faadab8f38e15eb548c5959acd29cb56ea4b36300e7d3ad33b5e1e585a6172.scope - libcontainer container b9faadab8f38e15eb548c5959acd29cb56ea4b36300e7d3ad33b5e1e585a6172. May 15 00:29:22.348882 containerd[1440]: time="2025-05-15T00:29:22.348834566Z" level=info msg="StartContainer for \"b9faadab8f38e15eb548c5959acd29cb56ea4b36300e7d3ad33b5e1e585a6172\" returns successfully" May 15 00:29:22.352820 containerd[1440]: time="2025-05-15T00:29:22.352706328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:22.356937 containerd[1440]: time="2025-05-15T00:29:22.356895331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 15 00:29:22.357887 containerd[1440]: time="2025-05-15T00:29:22.357857251Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:22.360192 containerd[1440]: time="2025-05-15T00:29:22.360139773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:22.361208 containerd[1440]: time="2025-05-15T00:29:22.360863613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 1.336558307s" May 15 00:29:22.361208 containerd[1440]: time="2025-05-15T00:29:22.360902493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" 
returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 15 00:29:22.362724 containerd[1440]: time="2025-05-15T00:29:22.362702534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 00:29:22.365502 containerd[1440]: time="2025-05-15T00:29:22.364613896Z" level=info msg="CreateContainer within sandbox \"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 00:29:22.378968 containerd[1440]: time="2025-05-15T00:29:22.378921345Z" level=info msg="CreateContainer within sandbox \"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fb0e2d1127e7f7ef611c3830378cde02ddaba1d2b3747132888b95b41c7df726\"" May 15 00:29:22.379480 containerd[1440]: time="2025-05-15T00:29:22.379405425Z" level=info msg="StartContainer for \"fb0e2d1127e7f7ef611c3830378cde02ddaba1d2b3747132888b95b41c7df726\"" May 15 00:29:22.414001 systemd[1]: Started cri-containerd-fb0e2d1127e7f7ef611c3830378cde02ddaba1d2b3747132888b95b41c7df726.scope - libcontainer container fb0e2d1127e7f7ef611c3830378cde02ddaba1d2b3747132888b95b41c7df726. May 15 00:29:22.452302 containerd[1440]: time="2025-05-15T00:29:22.452256392Z" level=info msg="StartContainer for \"fb0e2d1127e7f7ef611c3830378cde02ddaba1d2b3747132888b95b41c7df726\" returns successfully" May 15 00:29:22.821950 systemd-networkd[1376]: caliee1a68071ff: Gained IPv6LL May 15 00:29:22.910472 containerd[1440]: time="2025-05-15T00:29:22.910059849Z" level=info msg="StopPodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\"" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.951 [INFO][4862] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.951 [INFO][4862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" iface="eth0" netns="/var/run/netns/cni-02bf3152-59cb-d9cf-48b2-7f797d445c48" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.952 [INFO][4862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" iface="eth0" netns="/var/run/netns/cni-02bf3152-59cb-d9cf-48b2-7f797d445c48" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.952 [INFO][4862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" iface="eth0" netns="/var/run/netns/cni-02bf3152-59cb-d9cf-48b2-7f797d445c48" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.952 [INFO][4862] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.952 [INFO][4862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.970 [INFO][4871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.970 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.970 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.981 [WARNING][4871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.981 [INFO][4871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.983 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:22.987247 containerd[1440]: 2025-05-15 00:29:22.985 [INFO][4862] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:22.988066 containerd[1440]: time="2025-05-15T00:29:22.987457379Z" level=info msg="TearDown network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" successfully" May 15 00:29:22.988066 containerd[1440]: time="2025-05-15T00:29:22.987490859Z" level=info msg="StopPodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" returns successfully" May 15 00:29:22.988177 containerd[1440]: time="2025-05-15T00:29:22.988151219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-j887d,Uid:1c672a4f-dc2d-49d4-92b2-aa1858123efa,Namespace:calico-apiserver,Attempt:1,}" May 15 00:29:22.989013 kubelet[2466]: I0515 00:29:22.988991 2466 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 00:29:22.993442 kubelet[2466]: I0515 00:29:22.993415 2466 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 00:29:23.078917 systemd-networkd[1376]: cali2fc2c114722: Gained IPv6LL May 15 00:29:23.082229 kubelet[2466]: E0515 00:29:23.081895 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:23.095073 kubelet[2466]: I0515 00:29:23.094946 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gtg4n" podStartSLOduration=34.094930925 podStartE2EDuration="34.094930925s" podCreationTimestamp="2025-05-15 00:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:29:23.093600684 +0000 UTC m=+39.262409082" watchObservedRunningTime="2025-05-15 00:29:23.094930925 +0000 UTC m=+39.263739323" May 15 00:29:23.108331 kubelet[2466]: I0515 00:29:23.108256 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-64gbz" podStartSLOduration=25.225449383 podStartE2EDuration="28.108238213s" podCreationTimestamp="2025-05-15 00:28:55 +0000 UTC" firstStartedPulling="2025-05-15 00:29:19.479140184 +0000 UTC m=+35.647948582" lastFinishedPulling="2025-05-15 00:29:22.361929054 +0000 UTC m=+38.530737412" observedRunningTime="2025-05-15 00:29:23.107530332 +0000 UTC m=+39.276338690" watchObservedRunningTime="2025-05-15 00:29:23.108238213 +0000 UTC m=+39.277046611" May 15 00:29:23.156181 systemd[1]: run-netns-cni\x2d02bf3152\x2d59cb\x2dd9cf\x2d48b2\x2d7f797d445c48.mount: Deactivated successfully. 
May 15 00:29:23.254943 systemd-networkd[1376]: cali747cd19de0b: Link UP May 15 00:29:23.255201 systemd-networkd[1376]: cali747cd19de0b: Gained carrier May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.039 [INFO][4880] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0 calico-apiserver-79d7d69dbd- calico-apiserver 1c672a4f-dc2d-49d4-92b2-aa1858123efa 895 0 2025-05-15 00:28:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79d7d69dbd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79d7d69dbd-j887d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali747cd19de0b [] []}} ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.039 [INFO][4880] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.072 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" HandleID="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.216 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" HandleID="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011ce20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79d7d69dbd-j887d", "timestamp":"2025-05-15 00:29:23.072911591 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.216 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.216 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.216 [INFO][4894] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.219 [INFO][4894] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.224 [INFO][4894] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.229 [INFO][4894] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.232 [INFO][4894] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.235 [INFO][4894] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.235 [INFO][4894] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.237 [INFO][4894] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52 May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.241 [INFO][4894] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.250 [INFO][4894] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.250 [INFO][4894] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" host="localhost" May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.250 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:29:23.275304 containerd[1440]: 2025-05-15 00:29:23.250 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" HandleID="k8s-pod-network.8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.275933 containerd[1440]: 2025-05-15 00:29:23.253 [INFO][4880] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c672a4f-dc2d-49d4-92b2-aa1858123efa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79d7d69dbd-j887d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali747cd19de0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:23.275933 containerd[1440]: 2025-05-15 00:29:23.253 [INFO][4880] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.275933 containerd[1440]: 2025-05-15 00:29:23.253 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali747cd19de0b ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.275933 containerd[1440]: 2025-05-15 00:29:23.255 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.275933 containerd[1440]: 2025-05-15 00:29:23.256 [INFO][4880] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c672a4f-dc2d-49d4-92b2-aa1858123efa", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52", Pod:"calico-apiserver-79d7d69dbd-j887d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali747cd19de0b", MAC:"f2:60:1f:5a:81:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:23.275933 containerd[1440]: 2025-05-15 00:29:23.268 [INFO][4880] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52" Namespace="calico-apiserver" Pod="calico-apiserver-79d7d69dbd-j887d" WorkloadEndpoint="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:23.310734 containerd[1440]: time="2025-05-15T00:29:23.310361455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:29:23.310734 containerd[1440]: time="2025-05-15T00:29:23.310421295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:29:23.310734 containerd[1440]: time="2025-05-15T00:29:23.310436615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:23.310734 containerd[1440]: time="2025-05-15T00:29:23.310594775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:29:23.338983 systemd[1]: Started cri-containerd-8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52.scope - libcontainer container 8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52. 
May 15 00:29:23.350122 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:29:23.367708 containerd[1440]: time="2025-05-15T00:29:23.367584250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79d7d69dbd-j887d,Uid:1c672a4f-dc2d-49d4-92b2-aa1858123efa,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52\"" May 15 00:29:23.974088 systemd-networkd[1376]: cali5a95f8aeb86: Gained IPv6LL May 15 00:29:24.019320 containerd[1440]: time="2025-05-15T00:29:24.019270685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:24.020471 containerd[1440]: time="2025-05-15T00:29:24.020221765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.3: active requests=0, bytes read=32554116" May 15 00:29:24.021681 containerd[1440]: time="2025-05-15T00:29:24.021631526Z" level=info msg="ImageCreate event name:\"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:24.024571 containerd[1440]: time="2025-05-15T00:29:24.024520448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:24.025142 containerd[1440]: time="2025-05-15T00:29:24.025084448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" with image id \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9\", size \"33923266\" in 1.662177953s" May 15 00:29:24.025142 containerd[1440]: time="2025-05-15T00:29:24.025126088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:ec7c64189a2fd01b24b044fea1840d441e9884a0df32c2e9d6982cfbbea1f814\"" May 15 00:29:24.036764 containerd[1440]: time="2025-05-15T00:29:24.036713055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:29:24.045378 containerd[1440]: time="2025-05-15T00:29:24.045063899Z" level=info msg="CreateContainer within sandbox \"ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 00:29:24.059666 containerd[1440]: time="2025-05-15T00:29:24.059614068Z" level=info msg="CreateContainer within sandbox \"ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c168b56c3bb0194b5f147e916bd6255990153f4a42be2cfa8513197a0fea21d4\"" May 15 00:29:24.061762 containerd[1440]: time="2025-05-15T00:29:24.060731348Z" level=info msg="StartContainer for \"c168b56c3bb0194b5f147e916bd6255990153f4a42be2cfa8513197a0fea21d4\"" May 15 00:29:24.100372 kubelet[2466]: E0515 00:29:24.100327 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:24.105962 systemd[1]: Started 
cri-containerd-c168b56c3bb0194b5f147e916bd6255990153f4a42be2cfa8513197a0fea21d4.scope - libcontainer container c168b56c3bb0194b5f147e916bd6255990153f4a42be2cfa8513197a0fea21d4. May 15 00:29:24.138851 containerd[1440]: time="2025-05-15T00:29:24.138801913Z" level=info msg="StartContainer for \"c168b56c3bb0194b5f147e916bd6255990153f4a42be2cfa8513197a0fea21d4\" returns successfully" May 15 00:29:24.742095 systemd-networkd[1376]: cali747cd19de0b: Gained IPv6LL May 15 00:29:25.106042 kubelet[2466]: E0515 00:29:25.105932 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:25.120649 kubelet[2466]: I0515 00:29:25.120385 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84874c75bf-xm54f" podStartSLOduration=27.366900058 podStartE2EDuration="30.120355027s" podCreationTimestamp="2025-05-15 00:28:55 +0000 UTC" firstStartedPulling="2025-05-15 00:29:21.283056725 +0000 UTC m=+37.451865123" lastFinishedPulling="2025-05-15 00:29:24.036511734 +0000 UTC m=+40.205320092" observedRunningTime="2025-05-15 00:29:25.120283227 +0000 UTC m=+41.289091625" watchObservedRunningTime="2025-05-15 00:29:25.120355027 +0000 UTC m=+41.289163385" May 15 00:29:25.260628 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:50164.service - OpenSSH per-connection server daemon (10.0.0.1:50164). May 15 00:29:25.313837 sshd[5014]: Accepted publickey for core from 10.0.0.1 port 50164 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:25.317470 sshd[5014]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:25.323335 systemd-logind[1420]: New session 10 of user core. May 15 00:29:25.329941 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 00:29:25.616929 sshd[5014]: pam_unix(sshd:session): session closed for user core May 15 00:29:25.624535 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:50164.service: Deactivated successfully. May 15 00:29:25.627197 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:29:25.630140 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. May 15 00:29:25.636449 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:50174.service - OpenSSH per-connection server daemon (10.0.0.1:50174). May 15 00:29:25.638869 systemd-logind[1420]: Removed session 10. May 15 00:29:25.687785 sshd[5033]: Accepted publickey for core from 10.0.0.1 port 50174 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:25.689669 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:25.697234 systemd-logind[1420]: New session 11 of user core. May 15 00:29:25.705111 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 00:29:26.040272 sshd[5033]: pam_unix(sshd:session): session closed for user core May 15 00:29:26.048501 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:50174.service: Deactivated successfully. May 15 00:29:26.052744 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:29:26.056036 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. May 15 00:29:26.059399 systemd-logind[1420]: Removed session 11. May 15 00:29:26.068708 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:50176.service - OpenSSH per-connection server daemon (10.0.0.1:50176). 
May 15 00:29:26.107439 kubelet[2466]: I0515 00:29:26.107403 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:26.146328 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 50176 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:26.148062 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:26.152786 systemd-logind[1420]: New session 12 of user core. May 15 00:29:26.167979 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 00:29:26.251820 containerd[1440]: time="2025-05-15T00:29:26.251399301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:26.264228 containerd[1440]: time="2025-05-15T00:29:26.264164948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 15 00:29:26.278765 containerd[1440]: time="2025-05-15T00:29:26.278722475Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:26.303893 containerd[1440]: time="2025-05-15T00:29:26.303629688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 2.266869433s" May 15 00:29:26.303893 containerd[1440]: time="2025-05-15T00:29:26.303680568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:29:26.304287 containerd[1440]: time="2025-05-15T00:29:26.304128648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:26.306411 containerd[1440]: time="2025-05-15T00:29:26.306309569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:29:26.309153 containerd[1440]: time="2025-05-15T00:29:26.309030970Z" level=info msg="CreateContainer within sandbox \"22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:29:26.370833 sshd[5045]: pam_unix(sshd:session): session closed for user core May 15 00:29:26.376745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420579648.mount: Deactivated successfully. May 15 00:29:26.377672 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:50176.service: Deactivated successfully. May 15 00:29:26.379713 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:29:26.381318 containerd[1440]: time="2025-05-15T00:29:26.381218286Z" level=info msg="CreateContainer within sandbox \"22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b1e9a7e562c99282853d38ac6177a73a321999275a107cfbb511833376e8b40\"" May 15 00:29:26.381748 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. 
May 15 00:29:26.382582 containerd[1440]: time="2025-05-15T00:29:26.382322927Z" level=info msg="StartContainer for \"5b1e9a7e562c99282853d38ac6177a73a321999275a107cfbb511833376e8b40\"" May 15 00:29:26.386912 systemd-logind[1420]: Removed session 12. May 15 00:29:26.411986 systemd[1]: Started cri-containerd-5b1e9a7e562c99282853d38ac6177a73a321999275a107cfbb511833376e8b40.scope - libcontainer container 5b1e9a7e562c99282853d38ac6177a73a321999275a107cfbb511833376e8b40. May 15 00:29:26.443702 containerd[1440]: time="2025-05-15T00:29:26.443607398Z" level=info msg="StartContainer for \"5b1e9a7e562c99282853d38ac6177a73a321999275a107cfbb511833376e8b40\" returns successfully" May 15 00:29:26.638727 containerd[1440]: time="2025-05-15T00:29:26.637918095Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 00:29:26.639924 containerd[1440]: time="2025-05-15T00:29:26.639892936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=77" May 15 00:29:26.642264 containerd[1440]: time="2025-05-15T00:29:26.642221577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 335.093608ms" May 15 00:29:26.642398 containerd[1440]: time="2025-05-15T00:29:26.642381177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 15 00:29:26.644511 containerd[1440]: time="2025-05-15T00:29:26.644482218Z" level=info msg="CreateContainer within sandbox \"8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:29:26.656758 containerd[1440]: time="2025-05-15T00:29:26.656607304Z" level=info msg="CreateContainer within sandbox \"8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1bbaf150e62f48ca104189c6df7e7d00da7251ee69cf57746afd74e03bdf4a30\"" May 15 00:29:26.657544 containerd[1440]: time="2025-05-15T00:29:26.657318544Z" level=info msg="StartContainer for \"1bbaf150e62f48ca104189c6df7e7d00da7251ee69cf57746afd74e03bdf4a30\"" May 15 00:29:26.683988 systemd[1]: Started cri-containerd-1bbaf150e62f48ca104189c6df7e7d00da7251ee69cf57746afd74e03bdf4a30.scope - libcontainer container 1bbaf150e62f48ca104189c6df7e7d00da7251ee69cf57746afd74e03bdf4a30. 
May 15 00:29:26.779799 containerd[1440]: time="2025-05-15T00:29:26.779351925Z" level=info msg="StartContainer for \"1bbaf150e62f48ca104189c6df7e7d00da7251ee69cf57746afd74e03bdf4a30\" returns successfully" May 15 00:29:27.128012 kubelet[2466]: I0515 00:29:27.127481 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d7d69dbd-qwhnz" podStartSLOduration=27.232084539 podStartE2EDuration="32.127466375s" podCreationTimestamp="2025-05-15 00:28:55 +0000 UTC" firstStartedPulling="2025-05-15 00:29:21.410762573 +0000 UTC m=+37.579570931" lastFinishedPulling="2025-05-15 00:29:26.306144409 +0000 UTC m=+42.474952767" observedRunningTime="2025-05-15 00:29:27.126469215 +0000 UTC m=+43.295277613" watchObservedRunningTime="2025-05-15 00:29:27.127466375 +0000 UTC m=+43.296274773" May 15 00:29:27.138368 kubelet[2466]: I0515 00:29:27.137376 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79d7d69dbd-j887d" podStartSLOduration=28.863052414 podStartE2EDuration="32.13735918s" podCreationTimestamp="2025-05-15 00:28:55 +0000 UTC" firstStartedPulling="2025-05-15 00:29:23.368860691 +0000 UTC m=+39.537669089" lastFinishedPulling="2025-05-15 00:29:26.643167457 +0000 UTC m=+42.811975855" observedRunningTime="2025-05-15 00:29:27.13660246 +0000 UTC m=+43.305410858" watchObservedRunningTime="2025-05-15 00:29:27.13735918 +0000 UTC m=+43.306167578" May 15 00:29:28.120154 kubelet[2466]: I0515 00:29:28.120107 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:28.120553 kubelet[2466]: I0515 00:29:28.120349 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:29.121739 kubelet[2466]: I0515 00:29:29.121700 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:31.385421 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:50190.service - OpenSSH per-connection server daemon (10.0.0.1:50190). May 15 00:29:31.443077 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 50190 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:31.445484 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:31.450584 systemd-logind[1420]: New session 13 of user core. May 15 00:29:31.463025 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 00:29:31.667897 sshd[5158]: pam_unix(sshd:session): session closed for user core May 15 00:29:31.680557 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:50190.service: Deactivated successfully. May 15 00:29:31.682577 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:29:31.684241 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. May 15 00:29:31.691153 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:50192.service - OpenSSH per-connection server daemon (10.0.0.1:50192). May 15 00:29:31.692223 systemd-logind[1420]: Removed session 13. May 15 00:29:31.727614 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 50192 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:31.729529 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:31.733791 systemd-logind[1420]: New session 14 of user core. May 15 00:29:31.743978 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 00:29:31.970687 sshd[5173]: pam_unix(sshd:session): session closed for user core May 15 00:29:31.980200 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:50192.service: Deactivated successfully. May 15 00:29:31.982750 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:29:31.983570 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. May 15 00:29:31.986294 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:50208.service - OpenSSH per-connection server daemon (10.0.0.1:50208). May 15 00:29:31.988018 systemd-logind[1420]: Removed session 14. May 15 00:29:32.033208 sshd[5185]: Accepted publickey for core from 10.0.0.1 port 50208 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:32.034934 sshd[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:32.039856 systemd-logind[1420]: New session 15 of user core. May 15 00:29:32.048026 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 00:29:33.647565 sshd[5185]: pam_unix(sshd:session): session closed for user core May 15 00:29:33.657627 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:50208.service: Deactivated successfully. May 15 00:29:33.662458 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:29:33.664738 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. May 15 00:29:33.678434 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:43008.service - OpenSSH per-connection server daemon (10.0.0.1:43008). May 15 00:29:33.680876 systemd-logind[1420]: Removed session 15. May 15 00:29:33.719673 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 43008 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:33.721498 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:33.726363 systemd-logind[1420]: New session 16 of user core. May 15 00:29:33.736954 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 00:29:34.141134 sshd[5209]: pam_unix(sshd:session): session closed for user core May 15 00:29:34.148509 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:43008.service: Deactivated successfully. May 15 00:29:34.151218 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:29:34.152716 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. May 15 00:29:34.166046 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022). May 15 00:29:34.166926 systemd-logind[1420]: Removed session 16. May 15 00:29:34.202992 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:34.204476 sshd[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:34.208373 systemd-logind[1420]: New session 17 of user core. May 15 00:29:34.218911 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 00:29:34.344535 sshd[5224]: pam_unix(sshd:session): session closed for user core May 15 00:29:34.347755 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:43022.service: Deactivated successfully. May 15 00:29:34.350498 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:29:34.351056 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. May 15 00:29:34.351956 systemd-logind[1420]: Removed session 17. 
May 15 00:29:34.718661 kubelet[2466]: I0515 00:29:34.718611 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:38.112472 kubelet[2466]: I0515 00:29:38.112393 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:29:39.357010 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:43032.service - OpenSSH per-connection server daemon (10.0.0.1:43032). May 15 00:29:39.398565 sshd[5285]: Accepted publickey for core from 10.0.0.1 port 43032 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:39.399932 sshd[5285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:39.404351 systemd-logind[1420]: New session 18 of user core. May 15 00:29:39.411971 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 00:29:39.570272 sshd[5285]: pam_unix(sshd:session): session closed for user core May 15 00:29:39.574123 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:43032.service: Deactivated successfully. May 15 00:29:39.576094 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:29:39.576709 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. May 15 00:29:39.577524 systemd-logind[1420]: Removed session 18. May 15 00:29:43.448849 kubelet[2466]: E0515 00:29:43.448748 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:29:43.907288 containerd[1440]: time="2025-05-15T00:29:43.907230256Z" level=info msg="StopPodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\"" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.954 [WARNING][5344] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"936f2675-5caf-4d04-b8fa-7b40aacfa39d", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9", Pod:"coredns-6f6b679f8f-v6wlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e88a07bd86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.954 [INFO][5344] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.955 [INFO][5344] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" iface="eth0" netns="" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.955 [INFO][5344] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.955 [INFO][5344] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.974 [INFO][5353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.975 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.975 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.983 [WARNING][5353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.983 [INFO][5353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.984 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:43.989338 containerd[1440]: 2025-05-15 00:29:43.986 [INFO][5344] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:43.989732 containerd[1440]: time="2025-05-15T00:29:43.989388670Z" level=info msg="TearDown network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" successfully" May 15 00:29:43.989732 containerd[1440]: time="2025-05-15T00:29:43.989416190Z" level=info msg="StopPodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" returns successfully" May 15 00:29:43.990222 containerd[1440]: time="2025-05-15T00:29:43.990182310Z" level=info msg="RemovePodSandbox for \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\"" May 15 00:29:44.000139 containerd[1440]: time="2025-05-15T00:29:44.000075712Z" level=info msg="Forcibly stopping sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\"" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.036 [WARNING][5375] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"936f2675-5caf-4d04-b8fa-7b40aacfa39d", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c6e31ae61a8823cafe947fa86259fedaab20e9f65fdcc5ab0062a65a28a36ce9", Pod:"coredns-6f6b679f8f-v6wlt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e88a07bd86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.036 [INFO][5375] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.036 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" iface="eth0" netns="" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.036 [INFO][5375] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.036 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.057 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.057 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.057 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.065 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.065 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" HandleID="k8s-pod-network.7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" Workload="localhost-k8s-coredns--6f6b679f8f--v6wlt-eth0" May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.067 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.070360 containerd[1440]: 2025-05-15 00:29:44.068 [INFO][5375] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9" May 15 00:29:44.071346 containerd[1440]: time="2025-05-15T00:29:44.070844923Z" level=info msg="TearDown network for sandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" successfully" May 15 00:29:44.075176 containerd[1440]: time="2025-05-15T00:29:44.075141603Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:29:44.075313 containerd[1440]: time="2025-05-15T00:29:44.075295963Z" level=info msg="RemovePodSandbox \"7b795c269f812cf1f162548c08fd132d06641a8b9fef1ca4d47ba5f5f8098ed9\" returns successfully" May 15 00:29:44.075956 containerd[1440]: time="2025-05-15T00:29:44.075929804Z" level=info msg="StopPodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\"" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.118 [WARNING][5407] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--64gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f966a81f-0997-40a6-9fea-29e50dec8072", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6", Pod:"csi-node-driver-64gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif9079e85f98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.118 [INFO][5407] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.118 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" iface="eth0" netns="" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.118 [INFO][5407] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.118 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.139 [INFO][5416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.140 [INFO][5416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.140 [INFO][5416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.150 [WARNING][5416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.150 [INFO][5416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.152 [INFO][5416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.156746 containerd[1440]: 2025-05-15 00:29:44.154 [INFO][5407] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.157522 containerd[1440]: time="2025-05-15T00:29:44.156793256Z" level=info msg="TearDown network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" successfully" May 15 00:29:44.157522 containerd[1440]: time="2025-05-15T00:29:44.156821576Z" level=info msg="StopPodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" returns successfully" May 15 00:29:44.158025 containerd[1440]: time="2025-05-15T00:29:44.157973496Z" level=info msg="RemovePodSandbox for \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\"" May 15 00:29:44.158025 containerd[1440]: time="2025-05-15T00:29:44.158007096Z" level=info msg="Forcibly stopping sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\"" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.200 [WARNING][5439] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--64gbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f966a81f-0997-40a6-9fea-29e50dec8072", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b7e2a2883c0b8d477d8432a7953a6614748a6f8a23e89905386636291dd552d6", Pod:"csi-node-driver-64gbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif9079e85f98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.200 [INFO][5439] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.200 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" iface="eth0" netns="" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.200 [INFO][5439] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.200 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.221 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.221 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.221 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.229 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.229 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" HandleID="k8s-pod-network.7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" Workload="localhost-k8s-csi--node--driver--64gbz-eth0" May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.231 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.234405 containerd[1440]: 2025-05-15 00:29:44.232 [INFO][5439] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d" May 15 00:29:44.235003 containerd[1440]: time="2025-05-15T00:29:44.234436548Z" level=info msg="TearDown network for sandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" successfully" May 15 00:29:44.246428 containerd[1440]: time="2025-05-15T00:29:44.246370630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:29:44.246513 containerd[1440]: time="2025-05-15T00:29:44.246440270Z" level=info msg="RemovePodSandbox \"7557a63fdd03d76d7b5c10c19fb63564a857354c417e2401f6c70a7b416bfd6d\" returns successfully" May 15 00:29:44.247360 containerd[1440]: time="2025-05-15T00:29:44.246958990Z" level=info msg="StopPodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\"" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.284 [WARNING][5469] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"23f642bf-ba69-45cb-b737-8658c85d25d9", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226", Pod:"calico-apiserver-79d7d69dbd-qwhnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1a68071ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.284 [INFO][5469] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.284 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" iface="eth0" netns="" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.284 [INFO][5469] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.284 [INFO][5469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.305 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.305 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.305 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.313 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.313 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.315 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.318756 containerd[1440]: 2025-05-15 00:29:44.317 [INFO][5469] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.319172 containerd[1440]: time="2025-05-15T00:29:44.318831802Z" level=info msg="TearDown network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" successfully" May 15 00:29:44.319172 containerd[1440]: time="2025-05-15T00:29:44.318857002Z" level=info msg="StopPodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" returns successfully" May 15 00:29:44.319365 containerd[1440]: time="2025-05-15T00:29:44.319333642Z" level=info msg="RemovePodSandbox for \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\"" May 15 00:29:44.319424 containerd[1440]: time="2025-05-15T00:29:44.319371482Z" level=info msg="Forcibly stopping sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\"" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.357 [WARNING][5501] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"23f642bf-ba69-45cb-b737-8658c85d25d9", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"22fd29bfb2fec9c0ae9e3a5ec38177a835c9887ee14fdea8d742d773670b5226", Pod:"calico-apiserver-79d7d69dbd-qwhnz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee1a68071ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.357 [INFO][5501] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.357 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" iface="eth0" netns="" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.357 [INFO][5501] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.357 [INFO][5501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.376 [INFO][5510] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.376 [INFO][5510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.376 [INFO][5510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.387 [WARNING][5510] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.387 [INFO][5510] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" HandleID="k8s-pod-network.2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--qwhnz-eth0" May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.389 [INFO][5510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.392640 containerd[1440]: 2025-05-15 00:29:44.390 [INFO][5501] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2" May 15 00:29:44.393059 containerd[1440]: time="2025-05-15T00:29:44.392668013Z" level=info msg="TearDown network for sandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" successfully" May 15 00:29:44.395522 containerd[1440]: time="2025-05-15T00:29:44.395488854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:29:44.395755 containerd[1440]: time="2025-05-15T00:29:44.395549374Z" level=info msg="RemovePodSandbox \"2aba42420bada2240fc7f55217585d4e93d54d57b676ced42b890487d99fa5c2\" returns successfully" May 15 00:29:44.396034 containerd[1440]: time="2025-05-15T00:29:44.396010414Z" level=info msg="StopPodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\"" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.432 [WARNING][5533] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c672a4f-dc2d-49d4-92b2-aa1858123efa", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52", Pod:"calico-apiserver-79d7d69dbd-j887d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali747cd19de0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.432 [INFO][5533] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.432 [INFO][5533] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" iface="eth0" netns="" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.432 [INFO][5533] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.432 [INFO][5533] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.453 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.453 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.453 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.461 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.462 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.463 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.467045 containerd[1440]: 2025-05-15 00:29:44.465 [INFO][5533] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.467045 containerd[1440]: time="2025-05-15T00:29:44.467023065Z" level=info msg="TearDown network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" successfully" May 15 00:29:44.468599 containerd[1440]: time="2025-05-15T00:29:44.467049545Z" level=info msg="StopPodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" returns successfully" May 15 00:29:44.468599 containerd[1440]: time="2025-05-15T00:29:44.467498945Z" level=info msg="RemovePodSandbox for \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\"" May 15 00:29:44.468599 containerd[1440]: time="2025-05-15T00:29:44.467538265Z" level=info msg="Forcibly stopping sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\"" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.503 [WARNING][5565] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0", GenerateName:"calico-apiserver-79d7d69dbd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c672a4f-dc2d-49d4-92b2-aa1858123efa", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79d7d69dbd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8a6a979fd28df260541480d26ffddeb4511d0b8381328161e4de9af84a33af52", Pod:"calico-apiserver-79d7d69dbd-j887d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali747cd19de0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.503 [INFO][5565] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.503 [INFO][5565] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" iface="eth0" netns="" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.503 [INFO][5565] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.503 [INFO][5565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.524 [INFO][5574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.524 [INFO][5574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.524 [INFO][5574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.532 [WARNING][5574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.532 [INFO][5574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" HandleID="k8s-pod-network.b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" Workload="localhost-k8s-calico--apiserver--79d7d69dbd--j887d-eth0" May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.534 [INFO][5574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.538269 containerd[1440]: 2025-05-15 00:29:44.536 [INFO][5565] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c" May 15 00:29:44.538660 containerd[1440]: time="2025-05-15T00:29:44.538294636Z" level=info msg="TearDown network for sandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" successfully" May 15 00:29:44.547756 containerd[1440]: time="2025-05-15T00:29:44.547699837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:29:44.548092 containerd[1440]: time="2025-05-15T00:29:44.547808477Z" level=info msg="RemovePodSandbox \"b8f1fb817a0577b1d770e11a89ef221174840a126710271ec21991aefc6f3d9c\" returns successfully" May 15 00:29:44.548660 containerd[1440]: time="2025-05-15T00:29:44.548434597Z" level=info msg="StopPodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\"" May 15 00:29:44.613122 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:57162.service - OpenSSH per-connection server daemon (10.0.0.1:57162). May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.588 [WARNING][5597] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4b70c740-ad72-450d-bead-e209924ab516", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71", Pod:"coredns-6f6b679f8f-gtg4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a95f8aeb86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.588 [INFO][5597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.588 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" iface="eth0" netns="" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.588 [INFO][5597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.588 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.608 [INFO][5606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.608 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.608 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.619 [WARNING][5606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.619 [INFO][5606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.620 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.624833 containerd[1440]: 2025-05-15 00:29:44.622 [INFO][5597] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.624833 containerd[1440]: time="2025-05-15T00:29:44.624702889Z" level=info msg="TearDown network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" successfully" May 15 00:29:44.624833 containerd[1440]: time="2025-05-15T00:29:44.624731809Z" level=info msg="StopPodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" returns successfully" May 15 00:29:44.625254 containerd[1440]: time="2025-05-15T00:29:44.625196769Z" level=info msg="RemovePodSandbox for \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\"" May 15 00:29:44.625254 containerd[1440]: time="2025-05-15T00:29:44.625227729Z" level=info msg="Forcibly stopping sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\"" May 15 00:29:44.652061 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 57162 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:44.653580 sshd[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:44.658206 systemd-logind[1420]: New session 19 of user core. May 15 00:29:44.665016 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.673 [WARNING][5630] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"4b70c740-ad72-450d-bead-e209924ab516", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5adae3fc37731fe6ad0b1ab84926f72b3d082e054a0de033d900d312bbe8ce71", Pod:"coredns-6f6b679f8f-gtg4n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a95f8aeb86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.673 [INFO][5630] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.673 [INFO][5630] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" iface="eth0" netns="" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.673 [INFO][5630] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.673 [INFO][5630] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.695 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.695 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.695 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.703 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.703 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" HandleID="k8s-pod-network.a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" Workload="localhost-k8s-coredns--6f6b679f8f--gtg4n-eth0" May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.705 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.708514 containerd[1440]: 2025-05-15 00:29:44.707 [INFO][5630] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635" May 15 00:29:44.708928 containerd[1440]: time="2025-05-15T00:29:44.708550903Z" level=info msg="TearDown network for sandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" successfully" May 15 00:29:44.711251 containerd[1440]: time="2025-05-15T00:29:44.711205903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:29:44.711354 containerd[1440]: time="2025-05-15T00:29:44.711276103Z" level=info msg="RemovePodSandbox \"a29feb822a4959db0f57c5a7465ad3fec31a1347f261d3bdddac7f8ee9ed2635\" returns successfully" May 15 00:29:44.712008 containerd[1440]: time="2025-05-15T00:29:44.711981903Z" level=info msg="StopPodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\"" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.756 [WARNING][5670] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0", GenerateName:"calico-kube-controllers-84874c75bf-", Namespace:"calico-system", SelfLink:"", UID:"cdc568ed-985b-493b-94ee-815dbe88ed76", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84874c75bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9", Pod:"calico-kube-controllers-84874c75bf-xm54f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2fc2c114722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.756 [INFO][5670] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.756 [INFO][5670] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" iface="eth0" netns="" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.756 [INFO][5670] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.756 [INFO][5670] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.785 [INFO][5679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.785 [INFO][5679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.785 [INFO][5679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.793 [WARNING][5679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.793 [INFO][5679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.795 [INFO][5679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.798689 containerd[1440]: 2025-05-15 00:29:44.797 [INFO][5670] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.798689 containerd[1440]: time="2025-05-15T00:29:44.798667997Z" level=info msg="TearDown network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" successfully" May 15 00:29:44.799145 containerd[1440]: time="2025-05-15T00:29:44.798693357Z" level=info msg="StopPodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" returns successfully" May 15 00:29:44.800704 containerd[1440]: time="2025-05-15T00:29:44.800638517Z" level=info msg="RemovePodSandbox for \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\"" May 15 00:29:44.800704 containerd[1440]: time="2025-05-15T00:29:44.800675437Z" level=info msg="Forcibly stopping sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\"" May 15 00:29:44.833736 sshd[5611]: pam_unix(sshd:session): session closed for user core May 15 00:29:44.838050 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. May 15 00:29:44.838559 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:57162.service: Deactivated successfully. May 15 00:29:44.841435 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:29:44.842378 systemd-logind[1420]: Removed session 19. May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.849 [WARNING][5703] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0", GenerateName:"calico-kube-controllers-84874c75bf-", Namespace:"calico-system", SelfLink:"", UID:"cdc568ed-985b-493b-94ee-815dbe88ed76", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 28, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84874c75bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea6bab460062e34d7a5da1b2a5fcc9f7543f45543c0d14e13b47032782a5ffb9", Pod:"calico-kube-controllers-84874c75bf-xm54f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2fc2c114722", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.849 [INFO][5703] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.849 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" iface="eth0" netns="" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.849 [INFO][5703] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.849 [INFO][5703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.870 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.870 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.870 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.878 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.878 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" HandleID="k8s-pod-network.b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" Workload="localhost-k8s-calico--kube--controllers--84874c75bf--xm54f-eth0" May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.880 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:29:44.883381 containerd[1440]: 2025-05-15 00:29:44.881 [INFO][5703] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0" May 15 00:29:44.883898 containerd[1440]: time="2025-05-15T00:29:44.883419570Z" level=info msg="TearDown network for sandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" successfully" May 15 00:29:44.886108 containerd[1440]: time="2025-05-15T00:29:44.886070050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." May 15 00:29:44.886192 containerd[1440]: time="2025-05-15T00:29:44.886131610Z" level=info msg="RemovePodSandbox \"b7e438ed91ce94d2f81b71d354eee84278578b086c21c47e1bb620eb7490c7b0\" returns successfully" May 15 00:29:49.846843 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:57166.service - OpenSSH per-connection server daemon (10.0.0.1:57166). May 15 00:29:49.898720 sshd[5723]: Accepted publickey for core from 10.0.0.1 port 57166 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:29:49.900562 sshd[5723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:29:49.905201 systemd-logind[1420]: New session 20 of user core. May 15 00:29:49.911997 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 00:29:50.050890 sshd[5723]: pam_unix(sshd:session): session closed for user core May 15 00:29:50.054372 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:57166.service: Deactivated successfully. May 15 00:29:50.056262 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:29:50.056935 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. May 15 00:29:50.058203 systemd-logind[1420]: Removed session 20.