Jul 7 06:08:16.903274 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 06:08:16.903295 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 06:08:16.903305 kernel: KASLR enabled
Jul 7 06:08:16.903311 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:08:16.903317 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 7 06:08:16.903323 kernel: random: crng init done
Jul 7 06:08:16.903330 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:08:16.903336 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 7 06:08:16.903342 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:08:16.903350 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903356 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903362 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903369 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903375 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903383 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903390 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903397 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903403 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:08:16.903410 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 7 06:08:16.903416 kernel: NUMA: Failed to initialise from firmware
Jul 7 06:08:16.903423 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:08:16.903429 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 7 06:08:16.903436 kernel: Zone ranges:
Jul 7 06:08:16.903442 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:08:16.903448 kernel: DMA32 empty
Jul 7 06:08:16.903456 kernel: Normal empty
Jul 7 06:08:16.903462 kernel: Movable zone start for each node
Jul 7 06:08:16.903468 kernel: Early memory node ranges
Jul 7 06:08:16.903475 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 7 06:08:16.903481 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 7 06:08:16.903487 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 7 06:08:16.903494 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 7 06:08:16.903500 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 7 06:08:16.903506 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 7 06:08:16.903513 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 7 06:08:16.903519 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:08:16.903526 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 7 06:08:16.903533 kernel: psci: probing for conduit method from ACPI.
Jul 7 06:08:16.903540 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 06:08:16.903546 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 06:08:16.903555 kernel: psci: Trusted OS migration not required
Jul 7 06:08:16.903562 kernel: psci: SMC Calling Convention v1.1
Jul 7 06:08:16.903569 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 06:08:16.903577 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 06:08:16.903585 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 06:08:16.903591 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 7 06:08:16.903598 kernel: Detected PIPT I-cache on CPU0
Jul 7 06:08:16.903605 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 06:08:16.903612 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 06:08:16.903618 kernel: CPU features: detected: Spectre-v4
Jul 7 06:08:16.903625 kernel: CPU features: detected: Spectre-BHB
Jul 7 06:08:16.903632 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 06:08:16.903638 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 06:08:16.903647 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 06:08:16.903653 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 06:08:16.903660 kernel: alternatives: applying boot alternatives
Jul 7 06:08:16.903668 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:08:16.903675 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:08:16.903682 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:08:16.903688 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:08:16.903695 kernel: Fallback order for Node 0: 0
Jul 7 06:08:16.903702 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 7 06:08:16.903708 kernel: Policy zone: DMA
Jul 7 06:08:16.903715 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:08:16.903723 kernel: software IO TLB: area num 4.
Jul 7 06:08:16.903730 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 7 06:08:16.903737 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 7 06:08:16.903744 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:08:16.903751 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:08:16.903758 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:08:16.903774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:08:16.903781 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:08:16.903788 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:08:16.903795 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:08:16.903802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:08:16.903808 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 06:08:16.903817 kernel: GICv3: 256 SPIs implemented
Jul 7 06:08:16.903824 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 06:08:16.903830 kernel: Root IRQ handler: gic_handle_irq
Jul 7 06:08:16.903837 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 06:08:16.903844 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 06:08:16.903851 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 06:08:16.903857 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 06:08:16.903864 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 06:08:16.903871 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 7 06:08:16.903878 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 7 06:08:16.903885 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:08:16.903893 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:08:16.903900 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 06:08:16.903907 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 06:08:16.903913 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 06:08:16.903920 kernel: arm-pv: using stolen time PV
Jul 7 06:08:16.903927 kernel: Console: colour dummy device 80x25
Jul 7 06:08:16.903934 kernel: ACPI: Core revision 20230628
Jul 7 06:08:16.903941 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 06:08:16.903948 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:08:16.903955 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 06:08:16.904044 kernel: landlock: Up and running.
Jul 7 06:08:16.904052 kernel: SELinux: Initializing.
Jul 7 06:08:16.904058 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:08:16.904065 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:08:16.904073 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:08:16.904080 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:08:16.904087 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:08:16.904094 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:08:16.904101 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 06:08:16.904110 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 06:08:16.904117 kernel: Remapping and enabling EFI services.
Jul 7 06:08:16.904124 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:08:16.904131 kernel: Detected PIPT I-cache on CPU1
Jul 7 06:08:16.904138 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 06:08:16.904145 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 7 06:08:16.904152 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:08:16.904159 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 06:08:16.904166 kernel: Detected PIPT I-cache on CPU2
Jul 7 06:08:16.904173 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 7 06:08:16.904182 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 7 06:08:16.904189 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:08:16.904201 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 7 06:08:16.904210 kernel: Detected PIPT I-cache on CPU3
Jul 7 06:08:16.904217 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 7 06:08:16.904225 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 7 06:08:16.904232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:08:16.904239 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 7 06:08:16.904247 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:08:16.904255 kernel: SMP: Total of 4 processors activated.
Jul 7 06:08:16.904263 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 06:08:16.904270 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 06:08:16.904277 kernel: CPU features: detected: Common not Private translations
Jul 7 06:08:16.904285 kernel: CPU features: detected: CRC32 instructions
Jul 7 06:08:16.904292 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 06:08:16.904299 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 06:08:16.904307 kernel: CPU features: detected: LSE atomic instructions
Jul 7 06:08:16.904316 kernel: CPU features: detected: Privileged Access Never
Jul 7 06:08:16.904323 kernel: CPU features: detected: RAS Extension Support
Jul 7 06:08:16.904330 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 06:08:16.904337 kernel: CPU: All CPU(s) started at EL1
Jul 7 06:08:16.904345 kernel: alternatives: applying system-wide alternatives
Jul 7 06:08:16.904352 kernel: devtmpfs: initialized
Jul 7 06:08:16.904359 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:08:16.904367 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:08:16.904374 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:08:16.904383 kernel: SMBIOS 3.0.0 present.
Jul 7 06:08:16.904390 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 7 06:08:16.904398 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:08:16.904405 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 06:08:16.904413 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 06:08:16.904420 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 06:08:16.904428 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:08:16.904435 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 7 06:08:16.904442 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:08:16.904451 kernel: cpuidle: using governor menu
Jul 7 06:08:16.904458 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 06:08:16.904466 kernel: ASID allocator initialised with 32768 entries
Jul 7 06:08:16.904473 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:08:16.904480 kernel: Serial: AMBA PL011 UART driver
Jul 7 06:08:16.904487 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 06:08:16.904494 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 06:08:16.904502 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 06:08:16.904509 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:08:16.904518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:08:16.904525 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 06:08:16.904532 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 06:08:16.904540 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:08:16.904547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:08:16.904554 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 06:08:16.904561 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 06:08:16.904568 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:08:16.904576 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:08:16.904584 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:08:16.904592 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:08:16.904599 kernel: ACPI: Interpreter enabled
Jul 7 06:08:16.904606 kernel: ACPI: Using GIC for interrupt routing
Jul 7 06:08:16.904613 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 06:08:16.904621 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 06:08:16.904628 kernel: printk: console [ttyAMA0] enabled
Jul 7 06:08:16.904635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:08:16.904786 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:08:16.904866 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 06:08:16.904931 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 06:08:16.905007 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 06:08:16.905071 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 06:08:16.905081 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 06:08:16.905088 kernel: PCI host bridge to bus 0000:00
Jul 7 06:08:16.905157 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 06:08:16.905219 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 06:08:16.905276 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 06:08:16.905332 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:08:16.905411 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 06:08:16.905486 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 06:08:16.905555 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 7 06:08:16.905625 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 7 06:08:16.905692 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:08:16.905758 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:08:16.905840 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 7 06:08:16.905909 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 7 06:08:16.906013 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 06:08:16.906075 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 06:08:16.906136 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 06:08:16.906146 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 06:08:16.906153 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 06:08:16.906161 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 06:08:16.906169 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 06:08:16.906176 kernel: iommu: Default domain type: Translated
Jul 7 06:08:16.906184 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 06:08:16.906191 kernel: efivars: Registered efivars operations
Jul 7 06:08:16.906199 kernel: vgaarb: loaded
Jul 7 06:08:16.906208 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 06:08:16.906216 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:08:16.906224 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:08:16.906231 kernel: pnp: PnP ACPI init
Jul 7 06:08:16.906316 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 06:08:16.906331 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 06:08:16.906338 kernel: NET: Registered PF_INET protocol family
Jul 7 06:08:16.906346 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:08:16.906360 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:08:16.906370 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:08:16.906378 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:08:16.906385 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:08:16.906393 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:08:16.906400 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:08:16.906408 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:08:16.906417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:08:16.906425 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:08:16.906434 kernel: kvm [1]: HYP mode not available
Jul 7 06:08:16.906442 kernel: Initialise system trusted keyrings
Jul 7 06:08:16.906450 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:08:16.906460 kernel: Key type asymmetric registered
Jul 7 06:08:16.906469 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:08:16.906479 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:08:16.906487 kernel: io scheduler mq-deadline registered
Jul 7 06:08:16.906494 kernel: io scheduler kyber registered
Jul 7 06:08:16.906501 kernel: io scheduler bfq registered
Jul 7 06:08:16.906510 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 06:08:16.906518 kernel: ACPI: button: Power Button [PWRB]
Jul 7 06:08:16.906525 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 06:08:16.906593 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 7 06:08:16.906603 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:08:16.906611 kernel: thunder_xcv, ver 1.0
Jul 7 06:08:16.906618 kernel: thunder_bgx, ver 1.0
Jul 7 06:08:16.906626 kernel: nicpf, ver 1.0
Jul 7 06:08:16.906633 kernel: nicvf, ver 1.0
Jul 7 06:08:16.906708 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 06:08:16.906780 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:08:16 UTC (1751868496)
Jul 7 06:08:16.906792 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 06:08:16.906800 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 06:08:16.906807 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 06:08:16.906815 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 06:08:16.906822 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:08:16.906829 kernel: Segment Routing with IPv6
Jul 7 06:08:16.906839 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:08:16.906846 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:08:16.906853 kernel: Key type dns_resolver registered
Jul 7 06:08:16.906861 kernel: registered taskstats version 1
Jul 7 06:08:16.906868 kernel: Loading compiled-in X.509 certificates
Jul 7 06:08:16.906876 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 06:08:16.906883 kernel: Key type .fscrypt registered
Jul 7 06:08:16.906890 kernel: Key type fscrypt-provisioning registered
Jul 7 06:08:16.906898 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:08:16.906907 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:08:16.906914 kernel: ima: No architecture policies found
Jul 7 06:08:16.906921 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 06:08:16.906929 kernel: clk: Disabling unused clocks
Jul 7 06:08:16.906936 kernel: Freeing unused kernel memory: 39424K
Jul 7 06:08:16.906943 kernel: Run /init as init process
Jul 7 06:08:16.906950 kernel: with arguments:
Jul 7 06:08:16.906958 kernel: /init
Jul 7 06:08:16.906975 kernel: with environment:
Jul 7 06:08:16.906985 kernel: HOME=/
Jul 7 06:08:16.906992 kernel: TERM=linux
Jul 7 06:08:16.906999 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:08:16.907008 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:08:16.907018 systemd[1]: Detected virtualization kvm.
Jul 7 06:08:16.907026 systemd[1]: Detected architecture arm64.
Jul 7 06:08:16.907034 systemd[1]: Running in initrd.
Jul 7 06:08:16.907041 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:08:16.907051 systemd[1]: Hostname set to <localhost>.
Jul 7 06:08:16.907059 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:08:16.907067 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:08:16.907075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:08:16.907083 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:08:16.907091 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:08:16.907100 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:08:16.907109 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:08:16.907118 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:08:16.907127 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:08:16.907135 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:08:16.907143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:08:16.907151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:08:16.907159 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:08:16.907168 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:08:16.907176 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:08:16.907184 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:08:16.907192 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:08:16.907200 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:08:16.907208 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:08:16.907216 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 06:08:16.907224 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:08:16.907232 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:08:16.907242 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:08:16.907250 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:08:16.907258 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:08:16.907266 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:08:16.907274 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:08:16.907282 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:08:16.907290 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:08:16.907298 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:08:16.907308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:16.907316 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:08:16.907323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:08:16.907331 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:08:16.907340 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:08:16.907350 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:16.907358 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:08:16.907386 systemd-journald[238]: Collecting audit messages is disabled.
Jul 7 06:08:16.907405 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:08:16.907416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:08:16.907423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:08:16.907432 systemd-journald[238]: Journal started
Jul 7 06:08:16.907451 systemd-journald[238]: Runtime Journal (/run/log/journal/c4161e7fbdaf4d32bc608e4b75bb25b0) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:08:16.891751 systemd-modules-load[239]: Inserted module 'overlay'
Jul 7 06:08:16.910066 kernel: Bridge firewalling registered
Jul 7 06:08:16.909922 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 7 06:08:16.912574 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:08:16.913037 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:08:16.916639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:08:16.919007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:08:16.921902 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:08:16.925323 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:08:16.932137 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:08:16.933184 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:08:16.934560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:08:16.937693 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:08:16.942392 dracut-cmdline[272]: dracut-dracut-053
Jul 7 06:08:16.945448 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:08:16.969112 systemd-resolved[281]: Positive Trust Anchors:
Jul 7 06:08:16.969129 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:08:16.969160 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:08:16.973917 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jul 7 06:08:16.977266 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:08:16.978116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:08:17.015994 kernel: SCSI subsystem initialized
Jul 7 06:08:17.020980 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:08:17.029007 kernel: iscsi: registered transport (tcp)
Jul 7 06:08:17.041030 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:08:17.041059 kernel: QLogic iSCSI HBA Driver
Jul 7 06:08:17.084415 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:08:17.096111 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:08:17.113972 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:08:17.114021 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:08:17.115186 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 06:08:17.160991 kernel: raid6: neonx8 gen() 15682 MB/s
Jul 7 06:08:17.177983 kernel: raid6: neonx4 gen() 15609 MB/s
Jul 7 06:08:17.194981 kernel: raid6: neonx2 gen() 13211 MB/s
Jul 7 06:08:17.211986 kernel: raid6: neonx1 gen() 10485 MB/s
Jul 7 06:08:17.228977 kernel: raid6: int64x8 gen() 6959 MB/s
Jul 7 06:08:17.245985 kernel: raid6: int64x4 gen() 7353 MB/s
Jul 7 06:08:17.262986 kernel: raid6: int64x2 gen() 6114 MB/s
Jul 7 06:08:17.279979 kernel: raid6: int64x1 gen() 5034 MB/s
Jul 7 06:08:17.279997 kernel: raid6: using algorithm neonx8 gen() 15682 MB/s
Jul 7 06:08:17.296984 kernel: raid6: .... xor() 11904 MB/s, rmw enabled
Jul 7 06:08:17.296997 kernel: raid6: using neon recovery algorithm
Jul 7 06:08:17.301978 kernel: xor: measuring software checksum speed
Jul 7 06:08:17.302975 kernel: 8regs : 18066 MB/sec
Jul 7 06:08:17.302988 kernel: 32regs : 19660 MB/sec
Jul 7 06:08:17.303978 kernel: arm64_neon : 24983 MB/sec
Jul 7 06:08:17.303990 kernel: xor: using function: arm64_neon (24983 MB/sec)
Jul 7 06:08:17.361985 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:08:17.374031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:08:17.382142 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:08:17.394160 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jul 7 06:08:17.397356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:08:17.405164 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:08:17.416457 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 7 06:08:17.444351 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:08:17.459123 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:08:17.499566 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:08:17.511139 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:08:17.525660 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:08:17.527414 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:08:17.528351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:08:17.530560 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:08:17.536135 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:08:17.538099 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 7 06:08:17.548851 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:08:17.551140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:08:17.551264 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:08:17.556191 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:08:17.556216 kernel: GPT:9289727 != 19775487
Jul 7 06:08:17.556227 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:08:17.556237 kernel: GPT:9289727 != 19775487
Jul 7 06:08:17.556245 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:08:17.556265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:08:17.557107 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:08:17.558015 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:08:17.558328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:17.565116 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:17.575521 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (518)
Jul 7 06:08:17.577998 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (517)
Jul 7 06:08:17.578263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:17.580221 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:08:17.589079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:17.600134 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:08:17.604426 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:08:17.608694 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:08:17.612347 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:08:17.613206 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:08:17.627172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:08:17.628743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:08:17.648618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:08:17.661692 disk-uuid[552]: Primary Header is updated.
Jul 7 06:08:17.661692 disk-uuid[552]: Secondary Entries is updated.
Jul 7 06:08:17.661692 disk-uuid[552]: Secondary Header is updated.
Jul 7 06:08:17.666989 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:08:18.690999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:08:18.691424 disk-uuid[561]: The operation has completed successfully.
Jul 7 06:08:18.710898 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:08:18.711006 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:08:18.732167 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:08:18.735123 sh[575]: Success
Jul 7 06:08:18.747022 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 06:08:18.776290 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:08:18.790527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:08:18.792590 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:08:18.801994 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 06:08:18.802032 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:08:18.803417 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 06:08:18.803443 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 06:08:18.804405 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 06:08:18.808398 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:08:18.809576 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:08:18.826156 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:08:18.827743 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:08:18.836546 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:08:18.836585 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:08:18.836596 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:08:18.839000 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:08:18.846383 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 06:08:18.847980 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:08:18.917931 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:08:18.925180 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:08:18.926614 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:08:18.930461 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:08:18.969544 systemd-networkd[757]: lo: Link UP
Jul 7 06:08:18.969555 systemd-networkd[757]: lo: Gained carrier
Jul 7 06:08:18.971214 systemd-networkd[757]: Enumeration completed
Jul 7 06:08:18.971373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:08:18.972287 systemd[1]: Reached target network.target - Network.
Jul 7 06:08:18.973330 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:08:18.973333 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:08:18.974266 systemd-networkd[757]: eth0: Link UP
Jul 7 06:08:18.974269 systemd-networkd[757]: eth0: Gained carrier
Jul 7 06:08:18.974277 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:08:18.998033 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:08:19.039762 ignition[752]: Ignition 2.19.0
Jul 7 06:08:19.039772 ignition[752]: Stage: fetch-offline
Jul 7 06:08:19.039812 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:19.039822 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:08:19.039995 ignition[752]: parsed url from cmdline: ""
Jul 7 06:08:19.039999 ignition[752]: no config URL provided
Jul 7 06:08:19.040003 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:08:19.040011 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:08:19.040036 ignition[752]: op(1): [started] loading QEMU firmware config module
Jul 7 06:08:19.040045 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:08:19.051489 ignition[752]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:08:19.090604 ignition[752]: parsing config with SHA512: e1df7ecc5d8f9374e4797abac22a43803ab567d42bc767a3fdace11fc98e701e3f946b07c382e757c236db6ee8529d855d183b20b08db5a294c7d7953cbb7748
Jul 7 06:08:19.096837 unknown[752]: fetched base config from "system"
Jul 7 06:08:19.096847 unknown[752]: fetched user config from "qemu"
Jul 7 06:08:19.097296 ignition[752]: fetch-offline: fetch-offline passed
Jul 7 06:08:19.097361 ignition[752]: Ignition finished successfully
Jul 7 06:08:19.100262 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:08:19.101278 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:08:19.105132 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:08:19.117304 ignition[771]: Ignition 2.19.0
Jul 7 06:08:19.117315 ignition[771]: Stage: kargs
Jul 7 06:08:19.117495 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:19.117505 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:08:19.118372 ignition[771]: kargs: kargs passed
Jul 7 06:08:19.118423 ignition[771]: Ignition finished successfully
Jul 7 06:08:19.120447 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:08:19.129679 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:08:19.141796 ignition[779]: Ignition 2.19.0
Jul 7 06:08:19.141807 ignition[779]: Stage: disks
Jul 7 06:08:19.142004 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:19.142014 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:08:19.142907 ignition[779]: disks: disks passed
Jul 7 06:08:19.144458 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:08:19.142979 ignition[779]: Ignition finished successfully
Jul 7 06:08:19.145774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:08:19.146885 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:08:19.148169 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:08:19.149526 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:08:19.150993 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:08:19.165158 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:08:19.177296 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 06:08:19.181233 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:08:19.193135 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:08:19.235974 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 06:08:19.236395 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:08:19.237542 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:08:19.257104 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:08:19.258770 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:08:19.259875 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:08:19.259951 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:08:19.260045 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:08:19.266623 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Jul 7 06:08:19.266028 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:08:19.270369 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:08:19.270389 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:08:19.270400 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:08:19.268782 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:08:19.272979 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:08:19.274890 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:08:19.318370 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:08:19.323573 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:08:19.327700 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:08:19.332579 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:08:19.415783 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:08:19.430081 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:08:19.433482 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:08:19.435986 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:08:19.457297 ignition[913]: INFO : Ignition 2.19.0
Jul 7 06:08:19.457297 ignition[913]: INFO : Stage: mount
Jul 7 06:08:19.458784 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:19.458784 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:08:19.458784 ignition[913]: INFO : mount: mount passed
Jul 7 06:08:19.458784 ignition[913]: INFO : Ignition finished successfully
Jul 7 06:08:19.460409 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:08:19.466080 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:08:19.467033 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:08:19.802052 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:08:19.819184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:08:19.825284 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (927)
Jul 7 06:08:19.825321 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:08:19.825333 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:08:19.826402 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:08:19.828993 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:08:19.829530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:08:19.846062 ignition[944]: INFO : Ignition 2.19.0
Jul 7 06:08:19.846062 ignition[944]: INFO : Stage: files
Jul 7 06:08:19.847377 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:19.847377 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:08:19.847377 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:08:19.850002 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:08:19.850002 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:08:19.855678 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:08:19.856729 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:08:19.856729 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:08:19.856234 unknown[944]: wrote ssh authorized keys file for user: core
Jul 7 06:08:19.859787 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 7 06:08:19.859787 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 7 06:08:20.048248 systemd-networkd[757]: eth0: Gained IPv6LL
Jul 7 06:08:20.073078 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:08:20.191336 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:08:20.192791 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:08:20.203210 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:08:20.203210 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:08:20.203210 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:08:20.203210 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:08:20.203210 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:08:20.203210 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 7 06:08:20.702476 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:08:20.961304 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 7 06:08:20.961304 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:08:20.964341 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:08:20.966264 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:08:21.009651 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:08:21.015644 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:08:21.018007 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:08:21.018007 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:08:21.018007 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:08:21.018007 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:08:21.018007 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:08:21.018007 ignition[944]: INFO : files: files passed
Jul 7 06:08:21.018007 ignition[944]: INFO : Ignition finished successfully
Jul 7 06:08:21.021168 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:08:21.029205 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:08:21.032891 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:08:21.036589 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:08:21.036694 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:08:21.040134 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:08:21.042631 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:08:21.042631 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:08:21.045092 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:08:21.046472 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:08:21.047896 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:08:21.058194 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:08:21.080856 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:08:21.080974 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:08:21.082679 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:08:21.083923 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:08:21.085347 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:08:21.087104 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:08:21.103018 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:08:21.105469 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:08:21.117662 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:08:21.119346 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:08:21.120259 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:08:21.121628 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:08:21.121765 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:08:21.123604 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:08:21.125034 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:08:21.126240 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:08:21.127768 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:08:21.129445 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:08:21.131105 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:08:21.132537 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:08:21.133971 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:08:21.135438 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:08:21.136744 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:08:21.137914 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:08:21.138064 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:08:21.139818 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:08:21.141538 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:08:21.143037 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:08:21.144048 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:08:21.145272 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:08:21.145391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:08:21.147393 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:08:21.147510 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:08:21.149192 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:08:21.150545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:08:21.154022 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:08:21.155137 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:08:21.156847 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:08:21.158233 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:08:21.158321 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:08:21.159636 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:08:21.159714 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:08:21.161025 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:08:21.161138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:08:21.162527 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:08:21.162624 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:08:21.178242 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:08:21.180448 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:08:21.181241 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:08:21.181365 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:08:21.182811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:08:21.182914 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:08:21.188900 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:08:21.189026 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:08:21.193993 ignition[1000]: INFO : Ignition 2.19.0
Jul 7 06:08:21.193993 ignition[1000]: INFO : Stage: umount
Jul 7 06:08:21.193993 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:08:21.193993 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:08:21.197155 ignition[1000]: INFO : umount: umount passed
Jul 7 06:08:21.197155 ignition[1000]: INFO : Ignition finished successfully
Jul 7 06:08:21.196635 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:08:21.197127 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:08:21.197219 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:08:21.198770 systemd[1]: Stopped target network.target - Network.
Jul 7 06:08:21.200101 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:08:21.200161 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:08:21.201316 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:08:21.201354 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:08:21.202887 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:08:21.202932 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:08:21.204097 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:08:21.204136 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:08:21.205461 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:08:21.206762 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:08:21.212191 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:08:21.213191 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:08:21.214997 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:08:21.215093 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:08:21.219448 systemd-networkd[757]: eth0: DHCPv6 lease lost
Jul 7 06:08:21.222895 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:08:21.223079 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:08:21.224777 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:08:21.224818 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:08:21.235107 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:08:21.235775 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:08:21.235836 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:08:21.237474 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:08:21.237520 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:08:21.238795 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:08:21.238832 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:08:21.240634 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:08:21.248721 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:08:21.248884 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:08:21.250648 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:08:21.250741 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:08:21.252400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:08:21.252513 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:08:21.261704 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:08:21.261890 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:08:21.263780 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:08:21.263822 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:08:21.264955 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:08:21.265002 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:08:21.266301 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:08:21.266348 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:08:21.268640 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:08:21.268685 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:08:21.270627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:08:21.270670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:08:21.282137 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:08:21.283107 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:08:21.283172 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:08:21.284736 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 06:08:21.284792 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:08:21.286242 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:08:21.286281 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:08:21.287825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:08:21.287864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:21.289876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:08:21.290999 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:08:21.292487 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:08:21.294300 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:08:21.304930 systemd[1]: Switching root.
Jul 7 06:08:21.333005 systemd-journald[238]: Journal stopped
Jul 7 06:08:22.120671 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
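
At "Switching root." PID 1 pivots from the initramfs to the real root; the initrd journald[238] is terminated and a new instance starts below, which is why the journal briefly stops here. On a running machine this whole initrd portion of the boot can be replayed afterwards, for example:

    journalctl -b -o short-precise                 # full current boot, microsecond timestamps as in this log
    journalctl -b -u initrd-switch-root.service    # just the pivot itself
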
Jul 7 06:08:22.120728 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:08:22.120745 kernel: SELinux: policy capability open_perms=1
Jul 7 06:08:22.120769 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:08:22.120779 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:08:22.120789 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:08:22.120799 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:08:22.120809 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:08:22.120819 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:08:22.120829 kernel: audit: type=1403 audit(1751868501.547:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:08:22.120840 systemd[1]: Successfully loaded SELinux policy in 32.713ms.
Jul 7 06:08:22.120861 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.202ms.
Jul 7 06:08:22.120874 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:08:22.120885 systemd[1]: Detected virtualization kvm.
Jul 7 06:08:22.120896 systemd[1]: Detected architecture arm64.
Jul 7 06:08:22.120907 systemd[1]: Detected first boot.
Jul 7 06:08:22.120917 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:08:22.120928 zram_generator::config[1045]: No configuration found.
Jul 7 06:08:22.120940 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:08:22.120950 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:08:22.120973 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:08:22.120986 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:08:22.120999 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:08:22.121010 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:08:22.121025 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:08:22.121036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:08:22.121047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:08:22.121058 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:08:22.121072 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:08:22.121082 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:08:22.121093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:08:22.121104 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:08:22.121115 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:08:22.121127 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:08:22.121138 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:08:22.121149 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
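
The zram_generator line is informational: systemd's zram generator ran and found no configuration, so no compressed swap device is set up. One could be enabled with a drop-in along these lines (a sketch; no such file exists on this machine, hence "No configuration found"):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
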
Jul 7 06:08:22.121160 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 7 06:08:22.121172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:08:22.121183 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:08:22.121197 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:08:22.121208 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:08:22.121219 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:08:22.121230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:08:22.121243 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:08:22.121253 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:08:22.121266 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:08:22.121276 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:08:22.121287 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:08:22.121297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:08:22.121308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:08:22.121319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:08:22.121330 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:08:22.121340 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:08:22.121351 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:08:22.121363 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:08:22.121374 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:08:22.121384 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:08:22.121395 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:08:22.121407 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:08:22.121417 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:08:22.121428 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:08:22.121439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:08:22.121450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:08:22.121464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:08:22.121475 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:08:22.121486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:08:22.121497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:08:22.121507 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:08:22.121517 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:08:22.121528 kernel: fuse: init (API version 7.39)
Jul 7 06:08:22.121538 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:08:22.121550 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:08:22.121560 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:08:22.121571 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:08:22.121581 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:08:22.121591 kernel: ACPI: bus type drm_connector registered
Jul 7 06:08:22.121601 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:08:22.121611 kernel: loop: module loaded
Jul 7 06:08:22.121621 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:08:22.121632 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:08:22.121645 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:08:22.121655 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:08:22.121666 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:08:22.121677 systemd[1]: Stopped verity-setup.service.
Jul 7 06:08:22.121704 systemd-journald[1109]: Collecting audit messages is disabled.
Jul 7 06:08:22.121726 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:08:22.121738 systemd-journald[1109]: Journal started
Jul 7 06:08:22.121766 systemd-journald[1109]: Runtime Journal (/run/log/journal/c4161e7fbdaf4d32bc608e4b75bb25b0) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:08:21.934723 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:08:21.951899 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:08:21.952283 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:08:22.123729 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:08:22.125462 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:08:22.126488 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:08:22.127318 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:08:22.128205 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:08:22.129161 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:08:22.130127 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:08:22.131367 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:08:22.131522 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:08:22.132706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:08:22.132855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:08:22.133995 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:08:22.134149 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:08:22.135143 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:08:22.135279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:08:22.136517 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:08:22.136662 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:08:22.137938 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:08:22.138120 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:08:22.139164 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:08:22.140315 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:08:22.141458 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:08:22.153852 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:08:22.165065 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:08:22.166982 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:08:22.167805 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:08:22.167839 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:08:22.169566 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 06:08:22.171575 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:08:22.173541 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:08:22.174473 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:08:22.175854 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:08:22.180180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:08:22.181114 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:08:22.182757 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:08:22.183723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:08:22.186366 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:08:22.190117 systemd-journald[1109]: Time spent on flushing to /var/log/journal/c4161e7fbdaf4d32bc608e4b75bb25b0 is 14.320ms for 852 entries.
Jul 7 06:08:22.190117 systemd-journald[1109]: System Journal (/var/log/journal/c4161e7fbdaf4d32bc608e4b75bb25b0) is 8.0M, max 195.6M, 187.6M free.
Jul 7 06:08:22.212605 systemd-journald[1109]: Received client request to flush runtime journal.
Jul 7 06:08:22.192059 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:08:22.195334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:08:22.197852 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:08:22.199057 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:08:22.200209 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:08:22.201293 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:08:22.202502 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:08:22.203771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:08:22.211513 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
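
The journal limits above (runtime max 47.3M in /run, persistent max 195.6M under /var/log/journal) are defaults computed from the size of the backing filesystems. They could be pinned explicitly with something like the following sketch (not configuration from this boot):

    # /etc/systemd/journald.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=196M
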
Jul 7 06:08:22.221184 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 06:08:22.221990 kernel: loop0: detected capacity change from 0 to 207008
Jul 7 06:08:22.225514 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 06:08:22.227013 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:08:22.229353 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:08:22.242308 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:08:22.247618 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:08:22.249306 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 06:08:22.253905 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 7 06:08:22.266545 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jul 7 06:08:22.266567 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jul 7 06:08:22.272550 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:08:22.273037 kernel: loop1: detected capacity change from 0 to 114328
Jul 7 06:08:22.280219 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:08:22.297986 kernel: loop2: detected capacity change from 0 to 114432
Jul 7 06:08:22.306548 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:08:22.316359 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:08:22.325984 kernel: loop3: detected capacity change from 0 to 207008
Jul 7 06:08:22.328933 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jul 7 06:08:22.328953 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Jul 7 06:08:22.334236 kernel: loop4: detected capacity change from 0 to 114328
Jul 7 06:08:22.333183 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:08:22.340022 kernel: loop5: detected capacity change from 0 to 114432
Jul 7 06:08:22.342639 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:08:22.343470 (sd-merge)[1184]: Merged extensions into '/usr'.
Jul 7 06:08:22.347612 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:08:22.347629 systemd[1]: Reloading...
Jul 7 06:08:22.384985 zram_generator::config[1211]: No configuration found.
Jul 7 06:08:22.467114 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:08:22.493825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:08:22.529619 systemd[1]: Reloading finished in 181 ms.
Jul 7 06:08:22.557461 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:08:22.559098 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:08:22.572136 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:08:22.574019 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
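
The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes image is picked up only because Ignition linked it into /etc/extensions earlier. The merged state can be inspected or redone by hand:

    systemd-sysext status    # show merged hierarchies and the images backing them
    systemd-sysext refresh   # unmerge and re-merge after changing the images under /etc/extensions
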
Jul 7 06:08:22.584218 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:08:22.584236 systemd[1]: Reloading...
Jul 7 06:08:22.597292 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:08:22.597546 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:08:22.598216 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:08:22.598425 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 7 06:08:22.598482 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Jul 7 06:08:22.600682 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:08:22.600696 systemd-tmpfiles[1247]: Skipping /boot
Jul 7 06:08:22.607877 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:08:22.607893 systemd-tmpfiles[1247]: Skipping /boot
Jul 7 06:08:22.636179 zram_generator::config[1274]: No configuration found.
Jul 7 06:08:22.717858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:08:22.753871 systemd[1]: Reloading finished in 169 ms.
Jul 7 06:08:22.773356 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:08:22.787345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:08:22.794349 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 06:08:22.796738 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:08:22.798898 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:08:22.803146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:08:22.809890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:08:22.812937 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:08:22.817334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:08:22.818497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:08:22.821323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:08:22.824303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:08:22.827883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:08:22.828636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:08:22.828777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:08:22.830309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:08:22.830425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:08:22.833801 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:08:22.833932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
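
The "Duplicate line for path ..., ignoring" warnings are benign: two tmpfiles.d fragments declare the same path, and the duplicate line is skipped. Each such line uses the standard tmpfiles.d format, for example (illustrative only, matching the /var/log/journal path warned about above):

    # type  path              mode  user  group            age  argument
    d       /var/log/journal  2755  root  systemd-journal  -    -
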
Jul 7 06:08:22.835264 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:08:22.839556 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:08:22.839788 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:08:22.848271 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:08:22.850068 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Jul 7 06:08:22.851329 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:08:22.855573 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:08:22.859460 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:08:22.862600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:08:22.865312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:08:22.873064 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:08:22.875509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:08:22.876949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:08:22.877100 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:08:22.877702 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:08:22.879148 augenrules[1344]: No rules
Jul 7 06:08:22.881047 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:08:22.882697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:08:22.883993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:08:22.885208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:08:22.885329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:08:22.888274 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 06:08:22.889558 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:08:22.889678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:08:22.895169 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:08:22.918254 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:08:22.923033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364)
Jul 7 06:08:22.926587 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 7 06:08:22.932438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:08:22.942193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:08:22.948199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:08:22.950238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:08:22.953333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:08:22.955142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:08:22.959941 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:08:22.965167 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:08:22.965955 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:08:22.966439 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:08:22.967930 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:08:22.971108 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:08:22.971263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:08:22.975269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:08:22.975411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:08:22.976629 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:08:22.976772 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:08:22.986652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:08:22.994164 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:08:22.995133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:08:22.995185 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:08:22.995881 systemd-resolved[1315]: Positive Trust Anchors:
Jul 7 06:08:22.995893 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:08:22.995923 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:08:23.004140 systemd-resolved[1315]: Defaulting to hostname 'linux'.
Jul 7 06:08:23.007903 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:08:23.008869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:08:23.019647 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:08:23.035270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:08:23.044311 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:08:23.046482 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:08:23.050902 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 06:08:23.056799 systemd-networkd[1386]: lo: Link UP
Jul 7 06:08:23.056812 systemd-networkd[1386]: lo: Gained carrier
Jul 7 06:08:23.057598 systemd-networkd[1386]: Enumeration completed
Jul 7 06:08:23.059153 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 06:08:23.060101 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:08:23.061435 systemd[1]: Reached target network.target - Network.
Jul 7 06:08:23.063246 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:08:23.064888 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:08:23.064903 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:08:23.066086 systemd-networkd[1386]: eth0: Link UP
Jul 7 06:08:23.066095 systemd-networkd[1386]: eth0: Gained carrier
Jul 7 06:08:23.066110 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:08:23.077335 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 06:08:23.088855 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:08:23.090030 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:08:23.090606 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Jul 7 06:08:23.091235 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 06:08:23.091295 systemd-timesyncd[1387]: Initial clock synchronization to Mon 2025-07-07 06:08:23.439915 UTC.
Jul 7 06:08:23.101457 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 06:08:23.102591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:08:23.104133 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:08:23.105002 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:08:23.105895 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:08:23.107039 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:08:23.107941 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:08:23.108853 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:08:23.109805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:08:23.109841 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:08:23.110508 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:08:23.111779 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:08:23.114099 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:08:23.123139 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:08:23.125120 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
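
eth0 was matched by the stock catch-all /usr/lib/systemd/network/zz-default.network, which behaves roughly like the following abridged sketch (not the verbatim shipped file):

    # /usr/lib/systemd/network/zz-default.network (abridged)
    [Match]
    Name=*

    [Network]
    DHCP=yes

That is what produced the DHCPv4 lease 10.0.0.114/16 from 10.0.0.1 above, and the network change is also what prompted timesyncd to contact 10.0.0.1:123 and set the clock.
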
Jul 7 06:08:23.126392 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:08:23.127305 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:08:23.128035 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:08:23.128769 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:08:23.128801 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:08:23.129777 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:08:23.131628 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:08:23.133060 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 06:08:23.135120 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:08:23.137199 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:08:23.141057 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:08:23.145067 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:08:23.146805 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:08:23.149164 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:08:23.149985 jq[1418]: false
Jul 7 06:08:23.151032 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:08:23.156318 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:08:23.164931 dbus-daemon[1417]: [system] SELinux support is enabled
Jul 7 06:08:23.167493 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:08:23.168004 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:08:23.170383 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:08:23.172988 extend-filesystems[1419]: Found loop3
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found loop4
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found loop5
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda1
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda2
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda3
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found usr
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda4
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda6
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda7
Jul 7 06:08:23.173946 extend-filesystems[1419]: Found vda9
Jul 7 06:08:23.173946 extend-filesystems[1419]: Checking size of /dev/vda9
Jul 7 06:08:23.174308 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:08:23.178520 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:08:23.183997 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 06:08:23.188367 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:08:23.188562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:08:23.188835 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:08:23.188997 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 7 06:08:23.190863 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:08:23.191017 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:08:23.203734 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:08:23.203807 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:08:23.208308 jq[1432]: true
Jul 7 06:08:23.208147 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:08:23.208175 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:08:23.209287 extend-filesystems[1419]: Resized partition /dev/vda9
Jul 7 06:08:23.222986 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1365)
Jul 7 06:08:23.221334 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 7 06:08:23.222842 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 7 06:08:23.231290 jq[1450]: true
Jul 7 06:08:23.233311 tar[1439]: linux-arm64/LICENSE
Jul 7 06:08:23.233311 tar[1439]: linux-arm64/helm
Jul 7 06:08:23.236835 extend-filesystems[1453]: resize2fs 1.47.1 (20-May-2024)
Jul 7 06:08:23.241828 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 7 06:08:23.238525 systemd-logind[1425]: New seat seat0.
Jul 7 06:08:23.252102 update_engine[1429]: I20250707 06:08:23.251108 1429 main.cc:92] Flatcar Update Engine starting
Jul 7 06:08:23.250298 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 7 06:08:23.254245 update_engine[1429]: I20250707 06:08:23.254179 1429 update_check_scheduler.cc:74] Next update check in 10m40s
Jul 7 06:08:23.258630 systemd[1]: Started update-engine.service - Update Engine.
Jul 7 06:08:23.269367 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 7 06:08:23.276989 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 7 06:08:23.329896 extend-filesystems[1453]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 7 06:08:23.329896 extend-filesystems[1453]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 7 06:08:23.329896 extend-filesystems[1453]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 7 06:08:23.332554 extend-filesystems[1419]: Resized filesystem in /dev/vda9
Jul 7 06:08:23.332116 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 7 06:08:23.332297 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 7 06:08:23.339125 bash[1471]: Updated "/home/core/.ssh/authorized_keys"
Jul 7 06:08:23.342071 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 7 06:08:23.343556 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 7 06:08:23.344589 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
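
extend-filesystems grew the root filesystem online from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB), essentially a guarded wrapper around resize2fs, and locksmithd's strategy="reboot" reflects the reboot policy that can be set in the update.conf Ignition wrote earlier. A manual equivalent would look like this sketch:

    resize2fs /dev/vda9    # ext4 supports growing online while mounted on /

    # /etc/flatcar/update.conf -- one plausible content consistent with strategy="reboot";
    # the actual file contents were not logged
    REBOOT_STRATEGY=reboot
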
Jul 7 06:08:23.443492 containerd[1443]: time="2025-07-07T06:08:23.443399520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:08:23.468400 containerd[1443]: time="2025-07-07T06:08:23.468162200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.470440 containerd[1443]: time="2025-07-07T06:08:23.470392160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:08:23.470440 containerd[1443]: time="2025-07-07T06:08:23.470439880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:08:23.470548 containerd[1443]: time="2025-07-07T06:08:23.470464400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:08:23.470648 containerd[1443]: time="2025-07-07T06:08:23.470625920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 06:08:23.470684 containerd[1443]: time="2025-07-07T06:08:23.470649960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.470723 containerd[1443]: time="2025-07-07T06:08:23.470707480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:08:23.470754 containerd[1443]: time="2025-07-07T06:08:23.470724320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.470918 containerd[1443]: time="2025-07-07T06:08:23.470894720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:08:23.470918 containerd[1443]: time="2025-07-07T06:08:23.470915000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.471002 containerd[1443]: time="2025-07-07T06:08:23.470928880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:08:23.471002 containerd[1443]: time="2025-07-07T06:08:23.470938400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.471045 containerd[1443]: time="2025-07-07T06:08:23.471031080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.471248 containerd[1443]: time="2025-07-07T06:08:23.471225600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:08:23.471376 containerd[1443]: time="2025-07-07T06:08:23.471337040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:08:23.471376 containerd[1443]: time="2025-07-07T06:08:23.471356400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:08:23.471471 containerd[1443]: time="2025-07-07T06:08:23.471455520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:08:23.471519 containerd[1443]: time="2025-07-07T06:08:23.471507600Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:08:23.474677 containerd[1443]: time="2025-07-07T06:08:23.474602400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:08:23.474677 containerd[1443]: time="2025-07-07T06:08:23.474651720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:08:23.474677 containerd[1443]: time="2025-07-07T06:08:23.474668320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:08:23.474882 containerd[1443]: time="2025-07-07T06:08:23.474685000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 06:08:23.474882 containerd[1443]: time="2025-07-07T06:08:23.474700080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 06:08:23.474882 containerd[1443]: time="2025-07-07T06:08:23.474840160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475129000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475272080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475289400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475303120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475318160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475332520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475344880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475358360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475372160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475384480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475396600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475409240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475429880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.475872 containerd[1443]: time="2025-07-07T06:08:23.475443400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475455320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475468040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475480840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475493760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475506120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475518640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475531880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475546880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475557960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475569520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475583000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475598560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475619200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475630520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 7 06:08:23.476170 containerd[1443]: time="2025-07-07T06:08:23.475641000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:08:23.476412 containerd[1443]: time="2025-07-07T06:08:23.475773240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:08:23.476412 containerd[1443]: time="2025-07-07T06:08:23.475793000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:08:23.476412 containerd[1443]: time="2025-07-07T06:08:23.475804600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:08:23.476412 containerd[1443]: time="2025-07-07T06:08:23.475818360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:08:23.476412 containerd[1443]: time="2025-07-07T06:08:23.475828200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.476641 containerd[1443]: time="2025-07-07T06:08:23.475847240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:08:23.476709 containerd[1443]: time="2025-07-07T06:08:23.476694360Z" level=info msg="NRI interface is disabled by configuration." Jul 7 06:08:23.476772 containerd[1443]: time="2025-07-07T06:08:23.476758240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 06:08:23.477387 containerd[1443]: time="2025-07-07T06:08:23.477272480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:08:23.477795 containerd[1443]: time="2025-07-07T06:08:23.477598640Z" level=info msg="Connect containerd service" Jul 7 06:08:23.477795 containerd[1443]: time="2025-07-07T06:08:23.477644520Z" level=info msg="using legacy CRI server" Jul 7 06:08:23.477795 containerd[1443]: time="2025-07-07T06:08:23.477652520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:08:23.478031 containerd[1443]: time="2025-07-07T06:08:23.478008520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:08:23.479156 containerd[1443]: time="2025-07-07T06:08:23.479126280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:08:23.479456 containerd[1443]: time="2025-07-07T06:08:23.479415840Z" level=info msg="Start subscribing containerd event" Jul 7 06:08:23.479768 containerd[1443]: time="2025-07-07T06:08:23.479587880Z" level=info msg="Start recovering state" Jul 7 06:08:23.479768 containerd[1443]: time="2025-07-07T06:08:23.479663160Z" level=info msg="Start event monitor" Jul 7 06:08:23.479768 containerd[1443]: time="2025-07-07T06:08:23.479682360Z" level=info msg="Start snapshots syncer" Jul 7 06:08:23.479768 containerd[1443]: time="2025-07-07T06:08:23.479693280Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:08:23.479895 containerd[1443]: time="2025-07-07T06:08:23.479700400Z" level=info msg="Start streaming server" Jul 7 06:08:23.480640 containerd[1443]: time="2025-07-07T06:08:23.480601960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:08:23.481027 containerd[1443]: time="2025-07-07T06:08:23.480909040Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:08:23.481261 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:08:23.482500 containerd[1443]: time="2025-07-07T06:08:23.482371320Z" level=info msg="containerd successfully booted in 0.039888s" Jul 7 06:08:23.665670 tar[1439]: linux-arm64/README.md Jul 7 06:08:23.677512 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:08:23.806621 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:08:23.825239 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:08:23.835256 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:08:23.841901 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:08:23.842156 systemd[1]: Finished issuegen.service - Generate /run/issue. 
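The containerd startup above ends with the daemon serving on /run/containerd/containerd.sock (plus its ttrpc twin) after loading the plugin chain and the CRI config. A minimal Go sketch of how a client could verify that socket, assuming the github.com/containerd/containerd client module; the probe is hypothetical and not part of this boot:

```go
// Hypothetical probe (not run on this host): dial the socket containerd
// reports serving on and ask for its version, roughly what crictl or the
// kubelet does when it first connects.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same endpoint as `msg=serving... address=/run/containerd/containerd.sock`.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes-managed resources under "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Println("containerd:", v.Version)
}
```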
Jul 7 06:08:23.844733 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:08:23.858302 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:08:23.860916 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:08:23.862853 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:08:23.864055 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:08:24.337133 systemd-networkd[1386]: eth0: Gained IPv6LL Jul 7 06:08:24.340808 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:08:24.342464 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:08:24.353361 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:08:24.355827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:24.358089 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:08:24.374613 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:08:24.374804 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:08:24.376726 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:08:24.381130 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:08:24.944848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:24.946170 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:08:24.950743 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:24.951086 systemd[1]: Startup finished in 558ms (kernel) + 4.842s (initrd) + 3.437s (userspace) = 8.838s. Jul 7 06:08:25.353391 kubelet[1528]: E0707 06:08:25.353279 1528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:25.355851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:25.356031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:30.203744 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:08:30.204871 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:33694.service - OpenSSH per-connection server daemon (10.0.0.1:33694). Jul 7 06:08:30.253080 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 33694 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:30.255093 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:30.263653 systemd-logind[1425]: New session 1 of user core. Jul 7 06:08:30.264706 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:08:30.275235 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:08:30.286011 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:08:30.288415 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 7 06:08:30.295150 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:08:30.374084 systemd[1546]: Queued start job for default target default.target. Jul 7 06:08:30.385954 systemd[1546]: Created slice app.slice - User Application Slice. Jul 7 06:08:30.386006 systemd[1546]: Reached target paths.target - Paths. Jul 7 06:08:30.386020 systemd[1546]: Reached target timers.target - Timers. Jul 7 06:08:30.387342 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:08:30.398453 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:08:30.398577 systemd[1546]: Reached target sockets.target - Sockets. Jul 7 06:08:30.398591 systemd[1546]: Reached target basic.target - Basic System. Jul 7 06:08:30.398627 systemd[1546]: Reached target default.target - Main User Target. Jul 7 06:08:30.398655 systemd[1546]: Startup finished in 98ms. Jul 7 06:08:30.398862 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:08:30.414203 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:08:30.483892 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:33708.service - OpenSSH per-connection server daemon (10.0.0.1:33708). Jul 7 06:08:30.527317 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 33708 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:30.528694 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:30.532479 systemd-logind[1425]: New session 2 of user core. Jul 7 06:08:30.541151 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:08:30.593865 sshd[1557]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:30.602431 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:33708.service: Deactivated successfully. Jul 7 06:08:30.603799 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:08:30.606126 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:08:30.606537 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:33722.service - OpenSSH per-connection server daemon (10.0.0.1:33722). Jul 7 06:08:30.607954 systemd-logind[1425]: Removed session 2. Jul 7 06:08:30.642124 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 33722 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:30.643393 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:30.647709 systemd-logind[1425]: New session 3 of user core. Jul 7 06:08:30.662150 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:08:30.711314 sshd[1564]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:30.721567 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:33722.service: Deactivated successfully. Jul 7 06:08:30.724107 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:08:30.725297 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:08:30.726514 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:33734.service - OpenSSH per-connection server daemon (10.0.0.1:33734). Jul 7 06:08:30.727294 systemd-logind[1425]: Removed session 3. 
Jul 7 06:08:30.762644 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 33734 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:30.763857 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:30.767924 systemd-logind[1425]: New session 4 of user core. Jul 7 06:08:30.778150 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:08:30.830417 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:30.843536 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:33734.service: Deactivated successfully. Jul 7 06:08:30.845291 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:08:30.846799 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:08:30.848239 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:33744.service - OpenSSH per-connection server daemon (10.0.0.1:33744). Jul 7 06:08:30.849055 systemd-logind[1425]: Removed session 4. Jul 7 06:08:30.884260 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 33744 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:30.885551 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:30.889533 systemd-logind[1425]: New session 5 of user core. Jul 7 06:08:30.901223 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:08:30.971391 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:08:30.971678 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:30.982932 sudo[1581]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:30.984778 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:31.001403 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:33744.service: Deactivated successfully. Jul 7 06:08:31.002861 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:08:31.004248 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:08:31.005514 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:33752.service - OpenSSH per-connection server daemon (10.0.0.1:33752). Jul 7 06:08:31.006194 systemd-logind[1425]: Removed session 5. Jul 7 06:08:31.044655 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 33752 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:31.046061 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:31.053297 systemd-logind[1425]: New session 6 of user core. Jul 7 06:08:31.064192 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:08:31.115690 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:08:31.116013 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:31.119122 sudo[1590]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:31.123687 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 06:08:31.123964 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:31.142331 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 06:08:31.143589 auditctl[1593]: No rules Jul 7 06:08:31.144526 systemd[1]: audit-rules.service: Deactivated successfully. 
Jul 7 06:08:31.144753 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 06:08:31.146436 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:08:31.169492 augenrules[1611]: No rules Jul 7 06:08:31.170311 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:08:31.171546 sudo[1589]: pam_unix(sudo:session): session closed for user root Jul 7 06:08:31.173328 sshd[1586]: pam_unix(sshd:session): session closed for user core Jul 7 06:08:31.192622 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:33752.service: Deactivated successfully. Jul 7 06:08:31.194321 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:08:31.195643 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:08:31.207355 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:33768.service - OpenSSH per-connection server daemon (10.0.0.1:33768). Jul 7 06:08:31.208255 systemd-logind[1425]: Removed session 6. Jul 7 06:08:31.240688 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 33768 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:08:31.242118 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:08:31.246453 systemd-logind[1425]: New session 7 of user core. Jul 7 06:08:31.256734 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:08:31.308516 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:08:31.308809 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:08:31.640247 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:08:31.640378 (dockerd)[1640]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:08:31.903212 dockerd[1640]: time="2025-07-07T06:08:31.903072362Z" level=info msg="Starting up" Jul 7 06:08:32.055556 dockerd[1640]: time="2025-07-07T06:08:32.055493599Z" level=info msg="Loading containers: start." Jul 7 06:08:32.226057 kernel: Initializing XFRM netlink socket Jul 7 06:08:32.291798 systemd-networkd[1386]: docker0: Link UP Jul 7 06:08:32.312331 dockerd[1640]: time="2025-07-07T06:08:32.312275181Z" level=info msg="Loading containers: done." Jul 7 06:08:32.324840 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3907849454-merged.mount: Deactivated successfully. Jul 7 06:08:32.326589 dockerd[1640]: time="2025-07-07T06:08:32.326455546Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:08:32.326589 dockerd[1640]: time="2025-07-07T06:08:32.326573284Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:08:32.326710 dockerd[1640]: time="2025-07-07T06:08:32.326675959Z" level=info msg="Daemon has completed initialization" Jul 7 06:08:32.355715 dockerd[1640]: time="2025-07-07T06:08:32.355537114Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:08:32.355791 systemd[1]: Started docker.service - Docker Application Container Engine. 
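The Docker daemon reports "API listen on /run/docker.sock" once initialization completes. A hedged client-side check against that endpoint, assuming the github.com/docker/docker/client module (illustrative only, not something this host ran):

```go
// Hypothetical check: ping the daemon over the default unix socket and
// print the negotiated API version.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("new client: %v", err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("ping: %v", err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```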
Jul 7 06:08:32.965002 containerd[1443]: time="2025-07-07T06:08:32.964944056Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:08:33.543815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738406344.mount: Deactivated successfully. Jul 7 06:08:34.553804 containerd[1443]: time="2025-07-07T06:08:34.553751049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:34.554702 containerd[1443]: time="2025-07-07T06:08:34.554267465Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 7 06:08:34.557004 containerd[1443]: time="2025-07-07T06:08:34.555277316Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:34.558161 containerd[1443]: time="2025-07-07T06:08:34.558130629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:34.559519 containerd[1443]: time="2025-07-07T06:08:34.559480091Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.594475981s" Jul 7 06:08:34.559568 containerd[1443]: time="2025-07-07T06:08:34.559521601Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 7 06:08:34.560569 containerd[1443]: time="2025-07-07T06:08:34.560547312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:08:35.606493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:08:35.614173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 7 06:08:35.623671 containerd[1443]: time="2025-07-07T06:08:35.623611659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:35.624659 containerd[1443]: time="2025-07-07T06:08:35.624126616Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 7 06:08:35.625194 containerd[1443]: time="2025-07-07T06:08:35.625033829Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:35.640353 containerd[1443]: time="2025-07-07T06:08:35.640298413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:35.641826 containerd[1443]: time="2025-07-07T06:08:35.641765348Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.081182331s" Jul 7 06:08:35.641826 containerd[1443]: time="2025-07-07T06:08:35.641804901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 7 06:08:35.642342 containerd[1443]: time="2025-07-07T06:08:35.642313918Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:08:35.718875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:35.722819 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:35.759834 kubelet[1857]: E0707 06:08:35.759768 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:35.762606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:35.762753 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
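Both kubelet starts so far exit with status 1 for the same reason: /var/lib/kubelet/config.yaml does not exist yet, and it normally appears only once cluster bootstrap (e.g. kubeadm) writes it. A minimal reproduction of the failing read, not kubelet source:

```go
// Not kubelet source: a sketch of the preflight read that keeps failing.
// Until bootstrap writes the config file, every restart exits 1 exactly
// as the journal shows.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.ReadFile(path); err != nil {
		fmt.Fprintf(os.Stderr, "failed to load Kubelet config file %s: %v\n", path, err)
		os.Exit(1) // matches "Main process exited, code=exited, status=1/FAILURE"
	}
	fmt.Println("kubelet config present")
}
```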
Jul 7 06:08:36.856721 containerd[1443]: time="2025-07-07T06:08:36.856629622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:36.857193 containerd[1443]: time="2025-07-07T06:08:36.857137119Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 7 06:08:36.858036 containerd[1443]: time="2025-07-07T06:08:36.857998537Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:36.863698 containerd[1443]: time="2025-07-07T06:08:36.863618537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:36.864810 containerd[1443]: time="2025-07-07T06:08:36.864770282Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.222419332s" Jul 7 06:08:36.864877 containerd[1443]: time="2025-07-07T06:08:36.864819228Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 7 06:08:36.865338 containerd[1443]: time="2025-07-07T06:08:36.865300618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:08:37.868435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675407875.mount: Deactivated successfully. 
Jul 7 06:08:38.086577 containerd[1443]: time="2025-07-07T06:08:38.086507219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:38.087091 containerd[1443]: time="2025-07-07T06:08:38.087054353Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 7 06:08:38.087817 containerd[1443]: time="2025-07-07T06:08:38.087787449Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:38.089728 containerd[1443]: time="2025-07-07T06:08:38.089693296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:38.090562 containerd[1443]: time="2025-07-07T06:08:38.090526742Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.22519136s" Jul 7 06:08:38.090601 containerd[1443]: time="2025-07-07T06:08:38.090565843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 7 06:08:38.091078 containerd[1443]: time="2025-07-07T06:08:38.091046573Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:08:38.644447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971031944.mount: Deactivated successfully. 
Jul 7 06:08:39.427855 containerd[1443]: time="2025-07-07T06:08:39.427802613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:39.428342 containerd[1443]: time="2025-07-07T06:08:39.428255381Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 7 06:08:39.429140 containerd[1443]: time="2025-07-07T06:08:39.429086361Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:39.432860 containerd[1443]: time="2025-07-07T06:08:39.432824582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:39.434131 containerd[1443]: time="2025-07-07T06:08:39.433989756Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.342905137s" Jul 7 06:08:39.434131 containerd[1443]: time="2025-07-07T06:08:39.434032848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 06:08:39.434677 containerd[1443]: time="2025-07-07T06:08:39.434569145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:08:39.841128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1939830336.mount: Deactivated successfully. 
Jul 7 06:08:39.845614 containerd[1443]: time="2025-07-07T06:08:39.845563452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:39.846609 containerd[1443]: time="2025-07-07T06:08:39.846574042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 06:08:39.847436 containerd[1443]: time="2025-07-07T06:08:39.847391704Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:39.849416 containerd[1443]: time="2025-07-07T06:08:39.849385605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:39.850352 containerd[1443]: time="2025-07-07T06:08:39.850265792Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 415.662447ms" Jul 7 06:08:39.850352 containerd[1443]: time="2025-07-07T06:08:39.850301119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:08:39.850710 containerd[1443]: time="2025-07-07T06:08:39.850686130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:08:40.338535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45619499.mount: Deactivated successfully. Jul 7 06:08:42.085437 containerd[1443]: time="2025-07-07T06:08:42.085389648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:42.086497 containerd[1443]: time="2025-07-07T06:08:42.086109915Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 7 06:08:42.088034 containerd[1443]: time="2025-07-07T06:08:42.086781672Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:42.090032 containerd[1443]: time="2025-07-07T06:08:42.090000507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:08:42.091446 containerd[1443]: time="2025-07-07T06:08:42.091321974Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.240604397s" Jul 7 06:08:42.091446 containerd[1443]: time="2025-07-07T06:08:42.091357433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 7 06:08:46.013525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
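containerd logs each pull as "size X in Ys", which makes rough throughput easy to derive. The MiB/s figures below are computed from two of the size/duration pairs above; they are derived values, not taken from the log:

```go
// Derived arithmetic only: throughput implied by containerd's logged
// "size X in Ys" pairs for the kube-apiserver and etcd pulls above.
package main

import (
	"fmt"
	"time"
)

func main() {
	pulls := []struct {
		name  string
		bytes float64
		dur   time.Duration
	}{
		{"kube-apiserver:v1.32.6", 26324994, 1594475981 * time.Nanosecond},
		{"etcd:3.5.16-0", 67941650, 2240604397 * time.Nanosecond},
	}
	for _, p := range pulls {
		fmt.Printf("%-24s %6.1f MiB/s\n", p.name, p.bytes/p.dur.Seconds()/(1<<20))
	}
}
```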
Jul 7 06:08:46.023389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:46.165034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:46.168562 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:08:46.205761 kubelet[2020]: E0707 06:08:46.205710 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:08:46.208196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:08:46.208324 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:08:48.615810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:48.629203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:48.649312 systemd[1]: Reloading requested from client PID 2035 ('systemctl') (unit session-7.scope)... Jul 7 06:08:48.649331 systemd[1]: Reloading... Jul 7 06:08:48.724373 zram_generator::config[2077]: No configuration found. Jul 7 06:08:48.956977 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:49.011193 systemd[1]: Reloading finished in 361 ms. Jul 7 06:08:49.058892 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:49.061420 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:08:49.061657 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:49.063663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:49.171056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:49.174869 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:08:49.209150 kubelet[2121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:49.209150 kubelet[2121]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:08:49.209150 kubelet[2121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 06:08:49.209150 kubelet[2121]: I0707 06:08:49.209113 2121 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:08:49.830137 kubelet[2121]: I0707 06:08:49.830086 2121 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:08:49.830137 kubelet[2121]: I0707 06:08:49.830124 2121 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:08:49.830419 kubelet[2121]: I0707 06:08:49.830389 2121 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:08:49.861279 kubelet[2121]: E0707 06:08:49.861240 2121 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.114:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:49.864082 kubelet[2121]: I0707 06:08:49.863832 2121 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:08:49.868196 kubelet[2121]: E0707 06:08:49.868149 2121 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:08:49.868196 kubelet[2121]: I0707 06:08:49.868190 2121 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:08:49.872874 kubelet[2121]: I0707 06:08:49.872844 2121 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:08:49.874592 kubelet[2121]: I0707 06:08:49.874024 2121 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:08:49.874592 kubelet[2121]: I0707 06:08:49.874066 2121 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:08:49.874592 kubelet[2121]: I0707 06:08:49.874321 2121 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:08:49.874592 kubelet[2121]: I0707 06:08:49.874331 2121 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:08:49.874817 kubelet[2121]: I0707 06:08:49.874522 2121 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:49.877020 kubelet[2121]: I0707 06:08:49.876980 2121 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:08:49.877063 kubelet[2121]: I0707 06:08:49.877038 2121 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:08:49.877063 kubelet[2121]: I0707 06:08:49.877061 2121 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:08:49.877105 kubelet[2121]: I0707 06:08:49.877071 2121 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:08:49.881935 kubelet[2121]: W0707 06:08:49.881518 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:49.881935 kubelet[2121]: E0707 06:08:49.881582 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:49.882392 kubelet[2121]: I0707 06:08:49.882371 2121 kuberuntime_manager.go:269] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:08:49.883296 kubelet[2121]: I0707 06:08:49.883276 2121 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:08:49.883616 kubelet[2121]: W0707 06:08:49.883605 2121 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:08:49.885007 kubelet[2121]: I0707 06:08:49.884987 2121 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:08:49.885476 kubelet[2121]: I0707 06:08:49.885108 2121 server.go:1287] "Started kubelet" Jul 7 06:08:49.885893 kubelet[2121]: I0707 06:08:49.885843 2121 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:08:49.886935 kubelet[2121]: I0707 06:08:49.886702 2121 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:08:49.886935 kubelet[2121]: W0707 06:08:49.886756 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:49.886935 kubelet[2121]: E0707 06:08:49.886813 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:49.888729 kubelet[2121]: I0707 06:08:49.888700 2121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:08:49.891604 kubelet[2121]: I0707 06:08:49.890765 2121 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:08:49.891604 kubelet[2121]: I0707 06:08:49.891010 2121 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:08:49.891604 kubelet[2121]: I0707 06:08:49.891169 2121 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:08:49.891724 kubelet[2121]: E0707 06:08:49.891637 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:08:49.891724 kubelet[2121]: I0707 06:08:49.891711 2121 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:08:49.892037 kubelet[2121]: I0707 06:08:49.891980 2121 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:08:49.892098 kubelet[2121]: I0707 06:08:49.892053 2121 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:08:49.893971 kubelet[2121]: W0707 06:08:49.892696 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:49.893971 kubelet[2121]: E0707 06:08:49.892748 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" 
logger="UnhandledError" Jul 7 06:08:49.893971 kubelet[2121]: I0707 06:08:49.893378 2121 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:08:49.893971 kubelet[2121]: I0707 06:08:49.893466 2121 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:08:49.894305 kubelet[2121]: E0707 06:08:49.894064 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="200ms" Jul 7 06:08:49.895054 kubelet[2121]: I0707 06:08:49.894975 2121 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:08:49.895054 kubelet[2121]: E0707 06:08:49.894753 2121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe32568fc7556 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:08:49.885082966 +0000 UTC m=+0.707088325,LastTimestamp:2025-07-07 06:08:49.885082966 +0000 UTC m=+0.707088325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:08:49.895585 kubelet[2121]: E0707 06:08:49.895480 2121 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:08:49.905910 kubelet[2121]: I0707 06:08:49.905859 2121 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:08:49.906908 kubelet[2121]: I0707 06:08:49.906887 2121 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:08:49.906908 kubelet[2121]: I0707 06:08:49.906912 2121 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:08:49.907000 kubelet[2121]: I0707 06:08:49.906931 2121 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:08:49.907000 kubelet[2121]: I0707 06:08:49.906938 2121 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:08:49.907000 kubelet[2121]: E0707 06:08:49.906988 2121 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:08:49.909817 kubelet[2121]: W0707 06:08:49.909682 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:49.909817 kubelet[2121]: E0707 06:08:49.909737 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:49.910047 kubelet[2121]: I0707 06:08:49.910024 2121 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:08:49.910047 kubelet[2121]: I0707 06:08:49.910040 2121 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:08:49.910132 kubelet[2121]: I0707 06:08:49.910057 2121 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:49.987912 kubelet[2121]: I0707 06:08:49.987875 2121 policy_none.go:49] "None policy: Start" Jul 7 06:08:49.987912 kubelet[2121]: I0707 06:08:49.987905 2121 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:08:49.987912 kubelet[2121]: I0707 06:08:49.987924 2121 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:08:49.992620 kubelet[2121]: E0707 06:08:49.992592 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:08:49.993248 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:08:50.004010 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:08:50.008005 kubelet[2121]: E0707 06:08:50.007111 2121 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:08:50.009119 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:08:50.022879 kubelet[2121]: I0707 06:08:50.022849 2121 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:08:50.023103 kubelet[2121]: I0707 06:08:50.023075 2121 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:08:50.023137 kubelet[2121]: I0707 06:08:50.023095 2121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:08:50.023698 kubelet[2121]: I0707 06:08:50.023568 2121 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:08:50.024536 kubelet[2121]: E0707 06:08:50.024516 2121 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:08:50.024648 kubelet[2121]: E0707 06:08:50.024635 2121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:08:50.095022 kubelet[2121]: E0707 06:08:50.094889 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="400ms" Jul 7 06:08:50.125143 kubelet[2121]: I0707 06:08:50.125102 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:08:50.125548 kubelet[2121]: E0707 06:08:50.125519 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Jul 7 06:08:50.216122 systemd[1]: Created slice kubepods-burstable-pod7994124231d952a7d6c6066ce5376cad.slice - libcontainer container kubepods-burstable-pod7994124231d952a7d6c6066ce5376cad.slice. Jul 7 06:08:50.226636 kubelet[2121]: E0707 06:08:50.226594 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:50.229347 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 7 06:08:50.237027 kubelet[2121]: E0707 06:08:50.237000 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:50.239361 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 7 06:08:50.241146 kubelet[2121]: E0707 06:08:50.241114 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:50.294581 kubelet[2121]: I0707 06:08:50.294542 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7994124231d952a7d6c6066ce5376cad-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7994124231d952a7d6c6066ce5376cad\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:50.294581 kubelet[2121]: I0707 06:08:50.294578 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7994124231d952a7d6c6066ce5376cad-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7994124231d952a7d6c6066ce5376cad\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:50.294721 kubelet[2121]: I0707 06:08:50.294602 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:50.294721 kubelet[2121]: I0707 06:08:50.294619 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:50.294721 kubelet[2121]: I0707 06:08:50.294636 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:50.294721 kubelet[2121]: I0707 06:08:50.294651 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:50.294721 kubelet[2121]: I0707 06:08:50.294665 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7994124231d952a7d6c6066ce5376cad-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7994124231d952a7d6c6066ce5376cad\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:50.294823 kubelet[2121]: I0707 06:08:50.294681 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:50.294823 kubelet[2121]: I0707 06:08:50.294698 2121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:50.327690 kubelet[2121]: I0707 06:08:50.327646 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:08:50.328069 kubelet[2121]: E0707 06:08:50.328038 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Jul 7 06:08:50.495781 kubelet[2121]: E0707 06:08:50.495662 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="800ms" Jul 7 06:08:50.528058 kubelet[2121]: E0707 06:08:50.528020 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:50.528705 containerd[1443]: time="2025-07-07T06:08:50.528662294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7994124231d952a7d6c6066ce5376cad,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:50.537933 kubelet[2121]: E0707 06:08:50.537908 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:50.539451 containerd[1443]: time="2025-07-07T06:08:50.539287302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:50.541583 kubelet[2121]: E0707 06:08:50.541561 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:50.541941 containerd[1443]: time="2025-07-07T06:08:50.541911090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 7 06:08:50.730136 kubelet[2121]: I0707 06:08:50.730097 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:08:50.730458 kubelet[2121]: E0707 06:08:50.730414 2121 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.114:6443/api/v1/nodes\": dial tcp 10.0.0.114:6443: connect: connection refused" node="localhost" Jul 7 06:08:50.982119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4119230811.mount: Deactivated successfully. 
Jul 7 06:08:50.988360 containerd[1443]: time="2025-07-07T06:08:50.988310865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:50.989087 containerd[1443]: time="2025-07-07T06:08:50.988897298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 06:08:50.989775 containerd[1443]: time="2025-07-07T06:08:50.989733990Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:50.990629 containerd[1443]: time="2025-07-07T06:08:50.990592511Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:50.991567 containerd[1443]: time="2025-07-07T06:08:50.991424396Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:50.991567 containerd[1443]: time="2025-07-07T06:08:50.991554452Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:08:50.992146 containerd[1443]: time="2025-07-07T06:08:50.992113928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:08:50.993640 containerd[1443]: time="2025-07-07T06:08:50.993605505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:08:50.996819 containerd[1443]: time="2025-07-07T06:08:50.996705417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 457.349423ms" Jul 7 06:08:50.998122 containerd[1443]: time="2025-07-07T06:08:50.998057846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.307593ms" Jul 7 06:08:51.000894 containerd[1443]: time="2025-07-07T06:08:51.000758498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.768141ms" Jul 7 06:08:51.013707 kubelet[2121]: W0707 06:08:51.013648 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:51.013807 
kubelet[2121]: E0707 06:08:51.013715 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.114:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:51.112374 kubelet[2121]: E0707 06:08:51.112248 2121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.114:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.114:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe32568fc7556 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:08:49.885082966 +0000 UTC m=+0.707088325,LastTimestamp:2025-07-07 06:08:49.885082966 +0000 UTC m=+0.707088325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:08:51.144948 containerd[1443]: time="2025-07-07T06:08:51.144838163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:51.144948 containerd[1443]: time="2025-07-07T06:08:51.144914493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:51.144948 containerd[1443]: time="2025-07-07T06:08:51.144926427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:51.145701 containerd[1443]: time="2025-07-07T06:08:51.145649042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:51.146415 containerd[1443]: time="2025-07-07T06:08:51.146156002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:51.146499 containerd[1443]: time="2025-07-07T06:08:51.146446346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:08:51.146550 containerd[1443]: time="2025-07-07T06:08:51.146512384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:51.146550 containerd[1443]: time="2025-07-07T06:08:51.146538815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:51.146629 containerd[1443]: time="2025-07-07T06:08:51.146590757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:08:51.146629 containerd[1443]: time="2025-07-07T06:08:51.146612062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:51.147110 containerd[1443]: time="2025-07-07T06:08:51.146664604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:51.147110 containerd[1443]: time="2025-07-07T06:08:51.146702689Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:08:51.169140 systemd[1]: Started cri-containerd-657dfbb348683bd709b6cb8cde80baec689840a26ed6f11dd1751a814d409b57.scope - libcontainer container 657dfbb348683bd709b6cb8cde80baec689840a26ed6f11dd1751a814d409b57. Jul 7 06:08:51.170674 systemd[1]: Started cri-containerd-b3a73783a1144032fc828b26ecc7f9af75f8111012f37d59572d73dbb74f84a5.scope - libcontainer container b3a73783a1144032fc828b26ecc7f9af75f8111012f37d59572d73dbb74f84a5. Jul 7 06:08:51.174232 systemd[1]: Started cri-containerd-2e44cc71b7947f979ebc0e1435f3d64cb72c6c133017d623b777374bfb966526.scope - libcontainer container 2e44cc71b7947f979ebc0e1435f3d64cb72c6c133017d623b777374bfb966526. Jul 7 06:08:51.177573 kubelet[2121]: W0707 06:08:51.177481 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:51.177573 kubelet[2121]: E0707 06:08:51.177542 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.114:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:51.201083 containerd[1443]: time="2025-07-07T06:08:51.200896384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"657dfbb348683bd709b6cb8cde80baec689840a26ed6f11dd1751a814d409b57\"" Jul 7 06:08:51.202463 kubelet[2121]: E0707 06:08:51.202431 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:51.205097 containerd[1443]: time="2025-07-07T06:08:51.205065839Z" level=info msg="CreateContainer within sandbox \"657dfbb348683bd709b6cb8cde80baec689840a26ed6f11dd1751a814d409b57\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:08:51.206534 containerd[1443]: time="2025-07-07T06:08:51.206507945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a73783a1144032fc828b26ecc7f9af75f8111012f37d59572d73dbb74f84a5\"" Jul 7 06:08:51.207360 kubelet[2121]: E0707 06:08:51.207298 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:51.208742 containerd[1443]: time="2025-07-07T06:08:51.208640990Z" level=info msg="CreateContainer within sandbox \"b3a73783a1144032fc828b26ecc7f9af75f8111012f37d59572d73dbb74f84a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:08:51.213036 containerd[1443]: time="2025-07-07T06:08:51.213002511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7994124231d952a7d6c6066ce5376cad,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"2e44cc71b7947f979ebc0e1435f3d64cb72c6c133017d623b777374bfb966526\"" Jul 7 06:08:51.213665 kubelet[2121]: E0707 06:08:51.213643 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:51.215346 containerd[1443]: time="2025-07-07T06:08:51.215301592Z" level=info msg="CreateContainer within sandbox \"2e44cc71b7947f979ebc0e1435f3d64cb72c6c133017d623b777374bfb966526\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:08:51.218555 containerd[1443]: time="2025-07-07T06:08:51.218516076Z" level=info msg="CreateContainer within sandbox \"657dfbb348683bd709b6cb8cde80baec689840a26ed6f11dd1751a814d409b57\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c725c3c9cc615d73ce0068b007ccca1de280f977837c9e19113bf23dd442b3be\"" Jul 7 06:08:51.219100 containerd[1443]: time="2025-07-07T06:08:51.219073816Z" level=info msg="StartContainer for \"c725c3c9cc615d73ce0068b007ccca1de280f977837c9e19113bf23dd442b3be\"" Jul 7 06:08:51.222066 containerd[1443]: time="2025-07-07T06:08:51.221935803Z" level=info msg="CreateContainer within sandbox \"b3a73783a1144032fc828b26ecc7f9af75f8111012f37d59572d73dbb74f84a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4cc4427ff283768f2c864df0f73e698d919f0f6e9dd54fb89a78474623eed238\"" Jul 7 06:08:51.222615 containerd[1443]: time="2025-07-07T06:08:51.222590778Z" level=info msg="StartContainer for \"4cc4427ff283768f2c864df0f73e698d919f0f6e9dd54fb89a78474623eed238\"" Jul 7 06:08:51.232655 containerd[1443]: time="2025-07-07T06:08:51.232469109Z" level=info msg="CreateContainer within sandbox \"2e44cc71b7947f979ebc0e1435f3d64cb72c6c133017d623b777374bfb966526\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aec748e582978eca66ece73ee014328f34aa918e8f0e924c8a2844f3ed02e31d\"" Jul 7 06:08:51.233465 containerd[1443]: time="2025-07-07T06:08:51.233338257Z" level=info msg="StartContainer for \"aec748e582978eca66ece73ee014328f34aa918e8f0e924c8a2844f3ed02e31d\"" Jul 7 06:08:51.234376 kubelet[2121]: W0707 06:08:51.234327 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:51.234673 kubelet[2121]: E0707 06:08:51.234389 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.114:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:51.248131 systemd[1]: Started cri-containerd-4cc4427ff283768f2c864df0f73e698d919f0f6e9dd54fb89a78474623eed238.scope - libcontainer container 4cc4427ff283768f2c864df0f73e698d919f0f6e9dd54fb89a78474623eed238. Jul 7 06:08:51.250801 systemd[1]: Started cri-containerd-c725c3c9cc615d73ce0068b007ccca1de280f977837c9e19113bf23dd442b3be.scope - libcontainer container c725c3c9cc615d73ce0068b007ccca1de280f977837c9e19113bf23dd442b3be. Jul 7 06:08:51.261182 systemd[1]: Started cri-containerd-aec748e582978eca66ece73ee014328f34aa918e8f0e924c8a2844f3ed02e31d.scope - libcontainer container aec748e582978eca66ece73ee014328f34aa918e8f0e924c8a2844f3ed02e31d. 
Jul 7 06:08:51.287388 containerd[1443]: time="2025-07-07T06:08:51.287342929Z" level=info msg="StartContainer for \"4cc4427ff283768f2c864df0f73e698d919f0f6e9dd54fb89a78474623eed238\" returns successfully" Jul 7 06:08:51.296397 kubelet[2121]: E0707 06:08:51.296343 2121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.114:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.114:6443: connect: connection refused" interval="1.6s" Jul 7 06:08:51.298805 containerd[1443]: time="2025-07-07T06:08:51.298752031Z" level=info msg="StartContainer for \"aec748e582978eca66ece73ee014328f34aa918e8f0e924c8a2844f3ed02e31d\" returns successfully" Jul 7 06:08:51.298901 containerd[1443]: time="2025-07-07T06:08:51.298876458Z" level=info msg="StartContainer for \"c725c3c9cc615d73ce0068b007ccca1de280f977837c9e19113bf23dd442b3be\" returns successfully" Jul 7 06:08:51.449719 kubelet[2121]: W0707 06:08:51.449640 2121 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.114:6443: connect: connection refused Jul 7 06:08:51.449719 kubelet[2121]: E0707 06:08:51.449722 2121 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.114:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.114:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:08:51.532580 kubelet[2121]: I0707 06:08:51.532438 2121 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:08:51.915806 kubelet[2121]: E0707 06:08:51.915693 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:51.915914 kubelet[2121]: E0707 06:08:51.915823 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:51.919355 kubelet[2121]: E0707 06:08:51.919328 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:51.919460 kubelet[2121]: E0707 06:08:51.919441 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:51.922021 kubelet[2121]: E0707 06:08:51.921983 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:51.922133 kubelet[2121]: E0707 06:08:51.922116 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:52.926973 kubelet[2121]: E0707 06:08:52.924189 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:52.926973 kubelet[2121]: E0707 06:08:52.924319 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:52.926973 kubelet[2121]: E0707 06:08:52.925501 2121 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:08:52.926973 kubelet[2121]: E0707 06:08:52.925591 2121 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:52.964361 kubelet[2121]: E0707 06:08:52.964301 2121 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:08:53.146563 kubelet[2121]: I0707 06:08:53.146524 2121 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:08:53.146925 kubelet[2121]: E0707 06:08:53.146732 2121 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:08:53.158897 kubelet[2121]: E0707 06:08:53.158834 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:08:53.259422 kubelet[2121]: E0707 06:08:53.259287 2121 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:08:53.394900 kubelet[2121]: I0707 06:08:53.394793 2121 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:53.401879 kubelet[2121]: E0707 06:08:53.401645 2121 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:53.401879 kubelet[2121]: I0707 06:08:53.401673 2121 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:53.403309 kubelet[2121]: E0707 06:08:53.403287 2121 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:53.403309 kubelet[2121]: I0707 06:08:53.403311 2121 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:53.404762 kubelet[2121]: E0707 06:08:53.404717 2121 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:53.879518 kubelet[2121]: I0707 06:08:53.879482 2121 apiserver.go:52] "Watching apiserver" Jul 7 06:08:53.893020 kubelet[2121]: I0707 06:08:53.892981 2121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:08:55.134783 systemd[1]: Reloading requested from client PID 2404 ('systemctl') (unit session-7.scope)... Jul 7 06:08:55.134799 systemd[1]: Reloading... Jul 7 06:08:55.207012 zram_generator::config[2449]: No configuration found. Jul 7 06:08:55.287257 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:08:55.353481 systemd[1]: Reloading finished in 218 ms. 
Jul 7 06:08:55.390830 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:55.412328 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:08:55.412577 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:55.412624 systemd[1]: kubelet.service: Consumed 1.079s CPU time, 130.4M memory peak, 0B memory swap peak. Jul 7 06:08:55.424244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:08:55.529648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:08:55.533718 (kubelet)[2485]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:08:55.573236 kubelet[2485]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:55.573236 kubelet[2485]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:08:55.573236 kubelet[2485]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:08:55.573585 kubelet[2485]: I0707 06:08:55.573329 2485 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:08:55.579918 kubelet[2485]: I0707 06:08:55.579876 2485 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:08:55.579918 kubelet[2485]: I0707 06:08:55.579908 2485 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:08:55.580210 kubelet[2485]: I0707 06:08:55.580179 2485 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:08:55.581396 kubelet[2485]: I0707 06:08:55.581373 2485 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:08:55.583740 kubelet[2485]: I0707 06:08:55.583589 2485 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:08:55.586470 kubelet[2485]: E0707 06:08:55.586433 2485 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:08:55.586470 kubelet[2485]: I0707 06:08:55.586462 2485 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:08:55.590375 kubelet[2485]: I0707 06:08:55.589755 2485 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:08:55.590375 kubelet[2485]: I0707 06:08:55.589937 2485 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:08:55.590375 kubelet[2485]: I0707 06:08:55.589957 2485 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:08:55.590375 kubelet[2485]: I0707 06:08:55.590218 2485 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:08:55.590571 kubelet[2485]: I0707 06:08:55.590227 2485 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:08:55.590571 kubelet[2485]: I0707 06:08:55.590274 2485 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:55.590722 kubelet[2485]: I0707 06:08:55.590703 2485 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:08:55.590788 kubelet[2485]: I0707 06:08:55.590777 2485 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:08:55.590851 kubelet[2485]: I0707 06:08:55.590842 2485 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:08:55.590906 kubelet[2485]: I0707 06:08:55.590896 2485 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:08:55.592071 kubelet[2485]: I0707 06:08:55.592042 2485 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:08:55.595487 kubelet[2485]: I0707 06:08:55.595376 2485 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:08:55.595953 kubelet[2485]: I0707 06:08:55.595788 2485 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:08:55.595953 kubelet[2485]: I0707 06:08:55.595820 2485 server.go:1287] "Started kubelet" Jul 7 06:08:55.596584 kubelet[2485]: I0707 06:08:55.596458 2485 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:08:55.596773 kubelet[2485]: I0707 06:08:55.596735 2485 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:08:55.596812 kubelet[2485]: I0707 06:08:55.596797 2485 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:08:55.596991 kubelet[2485]: I0707 06:08:55.596958 2485 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:08:55.597737 kubelet[2485]: I0707 06:08:55.597706 2485 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:08:55.598737 kubelet[2485]: I0707 06:08:55.598700 2485 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:08:55.600027 kubelet[2485]: E0707 06:08:55.599898 2485 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:08:55.600027 kubelet[2485]: I0707 06:08:55.599933 2485 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:08:55.600316 kubelet[2485]: I0707 06:08:55.600112 2485 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:08:55.600316 kubelet[2485]: I0707 06:08:55.600239 2485 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:08:55.602799 kubelet[2485]: I0707 06:08:55.602768 2485 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:08:55.602888 kubelet[2485]: I0707 06:08:55.602865 2485 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:08:55.606995 kubelet[2485]: I0707 06:08:55.603678 2485 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:08:55.622398 kubelet[2485]: I0707 06:08:55.621926 2485 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:08:55.623705 kubelet[2485]: I0707 06:08:55.623483 2485 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:08:55.623705 kubelet[2485]: I0707 06:08:55.623506 2485 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:08:55.623705 kubelet[2485]: I0707 06:08:55.623534 2485 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:08:55.623705 kubelet[2485]: I0707 06:08:55.623542 2485 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:08:55.623705 kubelet[2485]: E0707 06:08:55.623579 2485 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:08:55.645717 kubelet[2485]: I0707 06:08:55.645635 2485 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.646842 2485 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.646877 2485 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.647041 2485 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.647054 2485 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.647071 2485 policy_none.go:49] "None policy: Start" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.647079 2485 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.647088 2485 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:08:55.647332 kubelet[2485]: I0707 06:08:55.647180 2485 state_mem.go:75] "Updated machine memory state" Jul 7 06:08:55.651124 kubelet[2485]: I0707 06:08:55.651101 2485 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:08:55.651956 kubelet[2485]: I0707 06:08:55.651935 2485 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:08:55.652425 kubelet[2485]: I0707 06:08:55.652390 2485 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:08:55.652865 kubelet[2485]: I0707 06:08:55.652846 2485 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:08:55.654631 kubelet[2485]: E0707 06:08:55.654594 2485 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:08:55.725692 kubelet[2485]: I0707 06:08:55.725324 2485 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:55.725692 kubelet[2485]: I0707 06:08:55.725427 2485 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:55.725692 kubelet[2485]: I0707 06:08:55.725602 2485 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:55.758242 kubelet[2485]: I0707 06:08:55.758165 2485 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:08:55.765363 kubelet[2485]: I0707 06:08:55.765319 2485 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 06:08:55.765700 kubelet[2485]: I0707 06:08:55.765517 2485 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:08:55.901858 kubelet[2485]: I0707 06:08:55.901752 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:55.902279 kubelet[2485]: I0707 06:08:55.902018 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:55.902279 kubelet[2485]: I0707 06:08:55.902046 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7994124231d952a7d6c6066ce5376cad-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7994124231d952a7d6c6066ce5376cad\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:55.902279 kubelet[2485]: I0707 06:08:55.902063 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:55.902279 kubelet[2485]: I0707 06:08:55.902083 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:55.902279 kubelet[2485]: I0707 06:08:55.902112 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:55.902404 kubelet[2485]: I0707 06:08:55.902131 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:55.902404 kubelet[2485]: I0707 06:08:55.902155 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7994124231d952a7d6c6066ce5376cad-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7994124231d952a7d6c6066ce5376cad\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:55.902404 kubelet[2485]: I0707 06:08:55.902173 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7994124231d952a7d6c6066ce5376cad-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7994124231d952a7d6c6066ce5376cad\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:08:56.030240 kubelet[2485]: E0707 06:08:56.030193 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:56.031150 kubelet[2485]: E0707 06:08:56.031113 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:56.031277 kubelet[2485]: E0707 06:08:56.031163 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:56.591508 kubelet[2485]: I0707 06:08:56.591434 2485 apiserver.go:52] "Watching apiserver" Jul 7 06:08:56.600552 kubelet[2485]: I0707 06:08:56.600513 2485 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:08:56.633173 kubelet[2485]: I0707 06:08:56.633060 2485 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:56.633173 kubelet[2485]: E0707 06:08:56.633114 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:56.633393 kubelet[2485]: I0707 06:08:56.633303 2485 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:56.644066 kubelet[2485]: E0707 06:08:56.644030 2485 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:08:56.644176 kubelet[2485]: E0707 06:08:56.644072 2485 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:08:56.644199 kubelet[2485]: E0707 06:08:56.644188 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:56.644223 kubelet[2485]: E0707 06:08:56.644205 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:56.664005 kubelet[2485]: I0707 06:08:56.663702 2485 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.663683916 podStartE2EDuration="1.663683916s" podCreationTimestamp="2025-07-07 06:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:56.652528381 +0000 UTC m=+1.115383117" watchObservedRunningTime="2025-07-07 06:08:56.663683916 +0000 UTC m=+1.126538612" Jul 7 06:08:56.676806 kubelet[2485]: I0707 06:08:56.676650 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.676633472 podStartE2EDuration="1.676633472s" podCreationTimestamp="2025-07-07 06:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:56.676460792 +0000 UTC m=+1.139315528" watchObservedRunningTime="2025-07-07 06:08:56.676633472 +0000 UTC m=+1.139488208" Jul 7 06:08:56.676806 kubelet[2485]: I0707 06:08:56.676725 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.676722275 podStartE2EDuration="1.676722275s" podCreationTimestamp="2025-07-07 06:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:08:56.664762675 +0000 UTC m=+1.127617411" watchObservedRunningTime="2025-07-07 06:08:56.676722275 +0000 UTC m=+1.139577011" Jul 7 06:08:57.634903 kubelet[2485]: E0707 06:08:57.634813 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:57.635257 kubelet[2485]: E0707 06:08:57.634988 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:57.635257 kubelet[2485]: E0707 06:08:57.635080 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:08:58.635958 kubelet[2485]: E0707 06:08:58.635929 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:00.370187 kubelet[2485]: I0707 06:09:00.370158 2485 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:09:00.371084 containerd[1443]: time="2025-07-07T06:09:00.370935314Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:09:00.371408 kubelet[2485]: I0707 06:09:00.371125 2485 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:09:01.023810 systemd[1]: Created slice kubepods-besteffort-pode8528d3b_d944_42cf_be15_f1e0b4e1c29a.slice - libcontainer container kubepods-besteffort-pode8528d3b_d944_42cf_be15_f1e0b4e1c29a.slice. 
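[Editor's note] The podStartE2EDuration values in these entries are plain timestamp arithmetic: the observation time minus podCreationTimestamp (the pull timestamps are zero-valued because these are static pods, so SLO and E2E durations coincide). Checking the kube-apiserver entry's 1.663683916s with the timestamps from the log itself:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time string layout, as used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-07-07 06:08:55 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-07-07 06:08:56.663683916 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 1.663683916s, as reported
}
```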
Jul 7 06:09:01.039187 kubelet[2485]: I0707 06:09:01.039151 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8528d3b-d944-42cf-be15-f1e0b4e1c29a-xtables-lock\") pod \"kube-proxy-k8lh6\" (UID: \"e8528d3b-d944-42cf-be15-f1e0b4e1c29a\") " pod="kube-system/kube-proxy-k8lh6" Jul 7 06:09:01.039187 kubelet[2485]: I0707 06:09:01.039228 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bt6\" (UniqueName: \"kubernetes.io/projected/e8528d3b-d944-42cf-be15-f1e0b4e1c29a-kube-api-access-s8bt6\") pod \"kube-proxy-k8lh6\" (UID: \"e8528d3b-d944-42cf-be15-f1e0b4e1c29a\") " pod="kube-system/kube-proxy-k8lh6" Jul 7 06:09:01.039187 kubelet[2485]: I0707 06:09:01.039252 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8528d3b-d944-42cf-be15-f1e0b4e1c29a-kube-proxy\") pod \"kube-proxy-k8lh6\" (UID: \"e8528d3b-d944-42cf-be15-f1e0b4e1c29a\") " pod="kube-system/kube-proxy-k8lh6" Jul 7 06:09:01.039187 kubelet[2485]: I0707 06:09:01.039271 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8528d3b-d944-42cf-be15-f1e0b4e1c29a-lib-modules\") pod \"kube-proxy-k8lh6\" (UID: \"e8528d3b-d944-42cf-be15-f1e0b4e1c29a\") " pod="kube-system/kube-proxy-k8lh6" Jul 7 06:09:01.147424 kubelet[2485]: E0707 06:09:01.147380 2485 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 7 06:09:01.147424 kubelet[2485]: E0707 06:09:01.147414 2485 projected.go:194] Error preparing data for projected volume kube-api-access-s8bt6 for pod kube-system/kube-proxy-k8lh6: configmap "kube-root-ca.crt" not found Jul 7 06:09:01.147570 kubelet[2485]: E0707 06:09:01.147468 2485 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8528d3b-d944-42cf-be15-f1e0b4e1c29a-kube-api-access-s8bt6 podName:e8528d3b-d944-42cf-be15-f1e0b4e1c29a nodeName:}" failed. No retries permitted until 2025-07-07 06:09:01.647448955 +0000 UTC m=+6.110303651 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s8bt6" (UniqueName: "kubernetes.io/projected/e8528d3b-d944-42cf-be15-f1e0b4e1c29a-kube-api-access-s8bt6") pod "kube-proxy-k8lh6" (UID: "e8528d3b-d944-42cf-be15-f1e0b4e1c29a") : configmap "kube-root-ca.crt" not found Jul 7 06:09:01.447129 systemd[1]: Created slice kubepods-besteffort-pod34108dd2_9901_40e1_8ea3_39def3b0c901.slice - libcontainer container kubepods-besteffort-pod34108dd2_9901_40e1_8ea3_39def3b0c901.slice. 
Jul 7 06:09:01.543148 kubelet[2485]: I0707 06:09:01.543080 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47kb\" (UniqueName: \"kubernetes.io/projected/34108dd2-9901-40e1-8ea3-39def3b0c901-kube-api-access-c47kb\") pod \"tigera-operator-747864d56d-2zh99\" (UID: \"34108dd2-9901-40e1-8ea3-39def3b0c901\") " pod="tigera-operator/tigera-operator-747864d56d-2zh99" Jul 7 06:09:01.543148 kubelet[2485]: I0707 06:09:01.543126 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34108dd2-9901-40e1-8ea3-39def3b0c901-var-lib-calico\") pod \"tigera-operator-747864d56d-2zh99\" (UID: \"34108dd2-9901-40e1-8ea3-39def3b0c901\") " pod="tigera-operator/tigera-operator-747864d56d-2zh99" Jul 7 06:09:01.751364 containerd[1443]: time="2025-07-07T06:09:01.751220578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2zh99,Uid:34108dd2-9901-40e1-8ea3-39def3b0c901,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:09:01.768463 containerd[1443]: time="2025-07-07T06:09:01.768378699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:01.768463 containerd[1443]: time="2025-07-07T06:09:01.768435779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:01.768463 containerd[1443]: time="2025-07-07T06:09:01.768448188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:01.769386 containerd[1443]: time="2025-07-07T06:09:01.769317556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:01.792149 systemd[1]: Started cri-containerd-17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0.scope - libcontainer container 17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0. Jul 7 06:09:01.817330 containerd[1443]: time="2025-07-07T06:09:01.817285667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-2zh99,Uid:34108dd2-9901-40e1-8ea3-39def3b0c901,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0\"" Jul 7 06:09:01.819351 containerd[1443]: time="2025-07-07T06:09:01.819242395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:09:01.932601 kubelet[2485]: E0707 06:09:01.932566 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:01.933822 containerd[1443]: time="2025-07-07T06:09:01.933167039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8lh6,Uid:e8528d3b-d944-42cf-be15-f1e0b4e1c29a,Namespace:kube-system,Attempt:0,}" Jul 7 06:09:01.954226 containerd[1443]: time="2025-07-07T06:09:01.954125058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:01.954226 containerd[1443]: time="2025-07-07T06:09:01.954170089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:01.954226 containerd[1443]: time="2025-07-07T06:09:01.954181097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:01.954416 containerd[1443]: time="2025-07-07T06:09:01.954255229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:01.975161 systemd[1]: Started cri-containerd-f468b8c5c1dd867f7e9f7a52b6dd37e8eb609e6946510544b184b3c0120fc2fb.scope - libcontainer container f468b8c5c1dd867f7e9f7a52b6dd37e8eb609e6946510544b184b3c0120fc2fb. Jul 7 06:09:01.992102 containerd[1443]: time="2025-07-07T06:09:01.992060872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8lh6,Uid:e8528d3b-d944-42cf-be15-f1e0b4e1c29a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f468b8c5c1dd867f7e9f7a52b6dd37e8eb609e6946510544b184b3c0120fc2fb\"" Jul 7 06:09:01.992791 kubelet[2485]: E0707 06:09:01.992764 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:01.995402 containerd[1443]: time="2025-07-07T06:09:01.995370267Z" level=info msg="CreateContainer within sandbox \"f468b8c5c1dd867f7e9f7a52b6dd37e8eb609e6946510544b184b3c0120fc2fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:09:02.007580 containerd[1443]: time="2025-07-07T06:09:02.007481215Z" level=info msg="CreateContainer within sandbox \"f468b8c5c1dd867f7e9f7a52b6dd37e8eb609e6946510544b184b3c0120fc2fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"052de16094bb250dc0f2c95d13f9c7e6f569ccc3a549381ccdc0c28deace8d9b\"" Jul 7 06:09:02.008032 containerd[1443]: time="2025-07-07T06:09:02.007999038Z" level=info msg="StartContainer for \"052de16094bb250dc0f2c95d13f9c7e6f569ccc3a549381ccdc0c28deace8d9b\"" Jul 7 06:09:02.035144 systemd[1]: Started cri-containerd-052de16094bb250dc0f2c95d13f9c7e6f569ccc3a549381ccdc0c28deace8d9b.scope - libcontainer container 052de16094bb250dc0f2c95d13f9c7e6f569ccc3a549381ccdc0c28deace8d9b. Jul 7 06:09:02.056183 containerd[1443]: time="2025-07-07T06:09:02.055774754Z" level=info msg="StartContainer for \"052de16094bb250dc0f2c95d13f9c7e6f569ccc3a549381ccdc0c28deace8d9b\" returns successfully" Jul 7 06:09:02.643748 kubelet[2485]: E0707 06:09:02.643694 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:02.652427 kubelet[2485]: I0707 06:09:02.652360 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k8lh6" podStartSLOduration=1.6523442739999998 podStartE2EDuration="1.652344274s" podCreationTimestamp="2025-07-07 06:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:02.652148865 +0000 UTC m=+7.115003641" watchObservedRunningTime="2025-07-07 06:09:02.652344274 +0000 UTC m=+7.115199010" Jul 7 06:09:03.051045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1003090002.mount: Deactivated successfully. 
Jul 7 06:09:03.328775 containerd[1443]: time="2025-07-07T06:09:03.328666079Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:03.329753 containerd[1443]: time="2025-07-07T06:09:03.329535584Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 06:09:03.330522 containerd[1443]: time="2025-07-07T06:09:03.330455602Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:03.332725 containerd[1443]: time="2025-07-07T06:09:03.332688082Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:03.333731 containerd[1443]: time="2025-07-07T06:09:03.333671939Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.514311982s" Jul 7 06:09:03.333792 containerd[1443]: time="2025-07-07T06:09:03.333738941Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 06:09:03.336129 containerd[1443]: time="2025-07-07T06:09:03.336098421Z" level=info msg="CreateContainer within sandbox \"17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:09:03.356674 containerd[1443]: time="2025-07-07T06:09:03.356638865Z" level=info msg="CreateContainer within sandbox \"17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e\"" Jul 7 06:09:03.357468 containerd[1443]: time="2025-07-07T06:09:03.357438767Z" level=info msg="StartContainer for \"1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e\"" Jul 7 06:09:03.383107 systemd[1]: Started cri-containerd-1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e.scope - libcontainer container 1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e. 
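[Editor's note] The entry below combines "bytes read=22150610" with a pull time of 1.514311982s for quay.io/tigera/operator:v1.38.3, which works out to roughly 13.9 MiB/s. A quick back-of-envelope check using only the two numbers from the log:

```go
package main

import "fmt"

func main() {
	bytesRead := 22150610.0 // from "stop pulling image ... bytes read=22150610"
	seconds := 1.514311982  // from "Pulled image ... in 1.514311982s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈13.9 MiB/s
}
```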
Jul 7 06:09:03.403479 containerd[1443]: time="2025-07-07T06:09:03.403432377Z" level=info msg="StartContainer for \"1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e\" returns successfully"
Jul 7 06:09:03.656881 kubelet[2485]: I0707 06:09:03.656441 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-2zh99" podStartSLOduration=1.140803748 podStartE2EDuration="2.656424908s" podCreationTimestamp="2025-07-07 06:09:01 +0000 UTC" firstStartedPulling="2025-07-07 06:09:01.818838113 +0000 UTC m=+6.281692809" lastFinishedPulling="2025-07-07 06:09:03.334459233 +0000 UTC m=+7.797313969" observedRunningTime="2025-07-07 06:09:03.65618884 +0000 UTC m=+8.119043656" watchObservedRunningTime="2025-07-07 06:09:03.656424908 +0000 UTC m=+8.119279644"
Jul 7 06:09:05.436725 systemd[1]: cri-containerd-1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e.scope: Deactivated successfully.
Jul 7 06:09:05.462861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e-rootfs.mount: Deactivated successfully.
Jul 7 06:09:05.561199 kubelet[2485]: E0707 06:09:05.560858 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:05.565852 containerd[1443]: time="2025-07-07T06:09:05.557721762Z" level=info msg="shim disconnected" id=1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e namespace=k8s.io
Jul 7 06:09:05.565852 containerd[1443]: time="2025-07-07T06:09:05.565656955Z" level=warning msg="cleaning up after shim disconnected" id=1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e namespace=k8s.io
Jul 7 06:09:05.565852 containerd[1443]: time="2025-07-07T06:09:05.565674245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:09:05.673990 kubelet[2485]: E0707 06:09:05.673946 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:05.674213 kubelet[2485]: I0707 06:09:05.674174 2485 scope.go:117] "RemoveContainer" containerID="1fed2310ddd3a824c0cc2e9391c5aa8bab5a345ca18dbee0abb7b478c536924e"
Jul 7 06:09:05.680778 containerd[1443]: time="2025-07-07T06:09:05.680726232Z" level=info msg="CreateContainer within sandbox \"17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 7 06:09:05.702211 containerd[1443]: time="2025-07-07T06:09:05.701758770Z" level=info msg="CreateContainer within sandbox \"17fbd7e65879bc0d4501a88edbfd4763ad97332609f53356b8902298645f94b0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8f340e751503e17ba5b83870e257b1cd610f8f505607cfbd883bc30edf71dcd4\""
Jul 7 06:09:05.702394 containerd[1443]: time="2025-07-07T06:09:05.702361110Z" level=info msg="StartContainer for \"8f340e751503e17ba5b83870e257b1cd610f8f505607cfbd883bc30edf71dcd4\""
Jul 7 06:09:05.746385 systemd[1]: Started cri-containerd-8f340e751503e17ba5b83870e257b1cd610f8f505607cfbd883bc30edf71dcd4.scope - libcontainer container 8f340e751503e17ba5b83870e257b1cd610f8f505607cfbd883bc30edf71dcd4.
Jul 7 06:09:05.808617 containerd[1443]: time="2025-07-07T06:09:05.808549539Z" level=info msg="StartContainer for \"8f340e751503e17ba5b83870e257b1cd610f8f505607cfbd883bc30edf71dcd4\" returns successfully"
Jul 7 06:09:06.083811 kubelet[2485]: E0707 06:09:06.083706 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:06.678493 kubelet[2485]: E0707 06:09:06.678457 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:07.593915 kubelet[2485]: E0707 06:09:07.593875 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:07.678053 kubelet[2485]: E0707 06:09:07.677978 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:08.114098 update_engine[1429]: I20250707 06:09:08.114043 1429 update_attempter.cc:509] Updating boot flags...
Jul 7 06:09:08.138285 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2936)
Jul 7 06:09:08.181002 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2938)
Jul 7 06:09:08.650255 sudo[1622]: pam_unix(sudo:session): session closed for user root
Jul 7 06:09:08.654815 sshd[1619]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:08.657518 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:33768.service: Deactivated successfully.
Jul 7 06:09:08.659344 systemd[1]: session-7.scope: Deactivated successfully.
Jul 7 06:09:08.659551 systemd[1]: session-7.scope: Consumed 8.524s CPU time, 150.8M memory peak, 0B memory swap peak.
Jul 7 06:09:08.660991 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit.
Jul 7 06:09:08.662020 systemd-logind[1425]: Removed session 7.
Jul 7 06:09:14.310216 systemd[1]: Created slice kubepods-besteffort-pod62770ff4_f81b_4178_83d8_fe74e44556f4.slice - libcontainer container kubepods-besteffort-pod62770ff4_f81b_4178_83d8_fe74e44556f4.slice.
Jul 7 06:09:14.349159 kubelet[2485]: I0707 06:09:14.349111 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62770ff4-f81b-4178-83d8-fe74e44556f4-tigera-ca-bundle\") pod \"calico-typha-5d794dff9b-fdhfd\" (UID: \"62770ff4-f81b-4178-83d8-fe74e44556f4\") " pod="calico-system/calico-typha-5d794dff9b-fdhfd"
Jul 7 06:09:14.349159 kubelet[2485]: I0707 06:09:14.349158 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/62770ff4-f81b-4178-83d8-fe74e44556f4-typha-certs\") pod \"calico-typha-5d794dff9b-fdhfd\" (UID: \"62770ff4-f81b-4178-83d8-fe74e44556f4\") " pod="calico-system/calico-typha-5d794dff9b-fdhfd"
Jul 7 06:09:14.349532 kubelet[2485]: I0707 06:09:14.349179 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mq2\" (UniqueName: \"kubernetes.io/projected/62770ff4-f81b-4178-83d8-fe74e44556f4-kube-api-access-t7mq2\") pod \"calico-typha-5d794dff9b-fdhfd\" (UID: \"62770ff4-f81b-4178-83d8-fe74e44556f4\") " pod="calico-system/calico-typha-5d794dff9b-fdhfd"
Jul 7 06:09:14.534906 systemd[1]: Created slice kubepods-besteffort-podcaed9238_72d5_48e5_973d_845d4dff0759.slice - libcontainer container kubepods-besteffort-podcaed9238_72d5_48e5_973d_845d4dff0759.slice.
Jul 7 06:09:14.616261 kubelet[2485]: E0707 06:09:14.616147 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:14.619258 containerd[1443]: time="2025-07-07T06:09:14.619213189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d794dff9b-fdhfd,Uid:62770ff4-f81b-4178-83d8-fe74e44556f4,Namespace:calico-system,Attempt:0,}"
Jul 7 06:09:14.640096 containerd[1443]: time="2025-07-07T06:09:14.640011094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:09:14.640096 containerd[1443]: time="2025-07-07T06:09:14.640071676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:09:14.640096 containerd[1443]: time="2025-07-07T06:09:14.640087121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:14.640372 containerd[1443]: time="2025-07-07T06:09:14.640211086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:14.651120 kubelet[2485]: I0707 06:09:14.651079 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-cni-log-dir\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651120 kubelet[2485]: I0707 06:09:14.651122 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-policysync\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651233 kubelet[2485]: I0707 06:09:14.651140 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-cni-net-dir\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651233 kubelet[2485]: I0707 06:09:14.651162 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/caed9238-72d5-48e5-973d-845d4dff0759-node-certs\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651233 kubelet[2485]: I0707 06:09:14.651180 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-cni-bin-dir\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651298 kubelet[2485]: I0707 06:09:14.651231 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-flexvol-driver-host\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651298 kubelet[2485]: I0707 06:09:14.651279 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-xtables-lock\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651348 kubelet[2485]: I0707 06:09:14.651325 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-lib-modules\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651371 kubelet[2485]: I0707 06:09:14.651342 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-var-lib-calico\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651371 kubelet[2485]: I0707 06:09:14.651360 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/caed9238-72d5-48e5-973d-845d4dff0759-var-run-calico\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651740 kubelet[2485]: I0707 06:09:14.651487 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5sx\" (UniqueName: \"kubernetes.io/projected/caed9238-72d5-48e5-973d-845d4dff0759-kube-api-access-kw5sx\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.651740 kubelet[2485]: I0707 06:09:14.651518 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caed9238-72d5-48e5-973d-845d4dff0759-tigera-ca-bundle\") pod \"calico-node-l2zc6\" (UID: \"caed9238-72d5-48e5-973d-845d4dff0759\") " pod="calico-system/calico-node-l2zc6"
Jul 7 06:09:14.661115 systemd[1]: Started cri-containerd-aed64b58719ac1d2d53aa02884ba878d457f982875ac4c04ceee80a2b2863ff3.scope - libcontainer container aed64b58719ac1d2d53aa02884ba878d457f982875ac4c04ceee80a2b2863ff3.
Jul 7 06:09:14.696351 containerd[1443]: time="2025-07-07T06:09:14.696289002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d794dff9b-fdhfd,Uid:62770ff4-f81b-4178-83d8-fe74e44556f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"aed64b58719ac1d2d53aa02884ba878d457f982875ac4c04ceee80a2b2863ff3\""
Jul 7 06:09:14.697027 kubelet[2485]: E0707 06:09:14.696986 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:14.698143 containerd[1443]: time="2025-07-07T06:09:14.698074206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:09:14.756491 kubelet[2485]: E0707 06:09:14.756451 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.756491 kubelet[2485]: W0707 06:09:14.756490 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.762406 kubelet[2485]: E0707 06:09:14.762213 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.764658 kubelet[2485]: E0707 06:09:14.764634 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.764658 kubelet[2485]: W0707 06:09:14.764655 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.764830 kubelet[2485]: E0707 06:09:14.764672 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.773409 kubelet[2485]: E0707 06:09:14.773383 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.773409 kubelet[2485]: W0707 06:09:14.773403 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.775071 kubelet[2485]: E0707 06:09:14.773424 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.793900 kubelet[2485]: E0707 06:09:14.793845 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txlwn" podUID="5705c009-0d57-436d-b155-b8ac4388465f"
Jul 7 06:09:14.817176 kubelet[2485]: E0707 06:09:14.817091 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.817176 kubelet[2485]: W0707 06:09:14.817118 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.817176 kubelet[2485]: E0707 06:09:14.817139 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.817351 kubelet[2485]: E0707 06:09:14.817327 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.822419 kubelet[2485]: W0707 06:09:14.817339 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.822419 kubelet[2485]: E0707 06:09:14.822270 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.822558 kubelet[2485]: E0707 06:09:14.822547 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.822558 kubelet[2485]: W0707 06:09:14.822559 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.822685 kubelet[2485]: E0707 06:09:14.822571 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.822738 kubelet[2485]: E0707 06:09:14.822722 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.822738 kubelet[2485]: W0707 06:09:14.822731 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.822781 kubelet[2485]: E0707 06:09:14.822740 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.823049 kubelet[2485]: E0707 06:09:14.823018 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.823085 kubelet[2485]: W0707 06:09:14.823050 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.823085 kubelet[2485]: E0707 06:09:14.823062 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.823259 kubelet[2485]: E0707 06:09:14.823247 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.823259 kubelet[2485]: W0707 06:09:14.823258 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.823314 kubelet[2485]: E0707 06:09:14.823271 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.823453 kubelet[2485]: E0707 06:09:14.823428 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.823453 kubelet[2485]: W0707 06:09:14.823441 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.823453 kubelet[2485]: E0707 06:09:14.823450 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.823661 kubelet[2485]: E0707 06:09:14.823612 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.823661 kubelet[2485]: W0707 06:09:14.823619 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.823661 kubelet[2485]: E0707 06:09:14.823627 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.823884 kubelet[2485]: E0707 06:09:14.823819 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.823884 kubelet[2485]: W0707 06:09:14.823827 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.823884 kubelet[2485]: E0707 06:09:14.823835 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824088 kubelet[2485]: E0707 06:09:14.823986 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824088 kubelet[2485]: W0707 06:09:14.823995 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824088 kubelet[2485]: E0707 06:09:14.824003 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824207 kubelet[2485]: E0707 06:09:14.824140 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824207 kubelet[2485]: W0707 06:09:14.824147 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824207 kubelet[2485]: E0707 06:09:14.824154 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824305 kubelet[2485]: E0707 06:09:14.824286 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824305 kubelet[2485]: W0707 06:09:14.824292 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824305 kubelet[2485]: E0707 06:09:14.824299 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824447 kubelet[2485]: E0707 06:09:14.824435 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824447 kubelet[2485]: W0707 06:09:14.824445 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824513 kubelet[2485]: E0707 06:09:14.824453 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824581 kubelet[2485]: E0707 06:09:14.824571 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824581 kubelet[2485]: W0707 06:09:14.824580 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824651 kubelet[2485]: E0707 06:09:14.824587 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824718 kubelet[2485]: E0707 06:09:14.824706 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824718 kubelet[2485]: W0707 06:09:14.824715 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824787 kubelet[2485]: E0707 06:09:14.824723 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.824853 kubelet[2485]: E0707 06:09:14.824844 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.824853 kubelet[2485]: W0707 06:09:14.824853 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.824920 kubelet[2485]: E0707 06:09:14.824860 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.825028 kubelet[2485]: E0707 06:09:14.825017 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.825028 kubelet[2485]: W0707 06:09:14.825028 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.825103 kubelet[2485]: E0707 06:09:14.825038 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.825180 kubelet[2485]: E0707 06:09:14.825170 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.825180 kubelet[2485]: W0707 06:09:14.825179 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.825252 kubelet[2485]: E0707 06:09:14.825187 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.825324 kubelet[2485]: E0707 06:09:14.825315 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.825324 kubelet[2485]: W0707 06:09:14.825324 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.825402 kubelet[2485]: E0707 06:09:14.825332 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.825466 kubelet[2485]: E0707 06:09:14.825456 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.825466 kubelet[2485]: W0707 06:09:14.825465 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.825515 kubelet[2485]: E0707 06:09:14.825473 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.839068 containerd[1443]: time="2025-07-07T06:09:14.839023507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l2zc6,Uid:caed9238-72d5-48e5-973d-845d4dff0759,Namespace:calico-system,Attempt:0,}"
Jul 7 06:09:14.853251 kubelet[2485]: E0707 06:09:14.853212 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.853251 kubelet[2485]: W0707 06:09:14.853237 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.853251 kubelet[2485]: E0707 06:09:14.853258 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.853553 kubelet[2485]: I0707 06:09:14.853288 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5705c009-0d57-436d-b155-b8ac4388465f-registration-dir\") pod \"csi-node-driver-txlwn\" (UID: \"5705c009-0d57-436d-b155-b8ac4388465f\") " pod="calico-system/csi-node-driver-txlwn"
Jul 7 06:09:14.853687 kubelet[2485]: E0707 06:09:14.853655 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.853724 kubelet[2485]: W0707 06:09:14.853675 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.853724 kubelet[2485]: E0707 06:09:14.853717 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.854080 kubelet[2485]: I0707 06:09:14.853735 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5705c009-0d57-436d-b155-b8ac4388465f-socket-dir\") pod \"csi-node-driver-txlwn\" (UID: \"5705c009-0d57-436d-b155-b8ac4388465f\") " pod="calico-system/csi-node-driver-txlwn"
Jul 7 06:09:14.854080 kubelet[2485]: E0707 06:09:14.854013 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.854080 kubelet[2485]: W0707 06:09:14.854030 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.854080 kubelet[2485]: E0707 06:09:14.854046 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.854262 kubelet[2485]: E0707 06:09:14.854242 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.854262 kubelet[2485]: W0707 06:09:14.854257 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.854454 kubelet[2485]: E0707 06:09:14.854274 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.854664 kubelet[2485]: E0707 06:09:14.854531 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.854664 kubelet[2485]: W0707 06:09:14.854545 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.854664 kubelet[2485]: E0707 06:09:14.854579 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.854664 kubelet[2485]: I0707 06:09:14.854599 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5705c009-0d57-436d-b155-b8ac4388465f-varrun\") pod \"csi-node-driver-txlwn\" (UID: \"5705c009-0d57-436d-b155-b8ac4388465f\") " pod="calico-system/csi-node-driver-txlwn"
Jul 7 06:09:14.855500 kubelet[2485]: E0707 06:09:14.855333 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.855500 kubelet[2485]: W0707 06:09:14.855356 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.855500 kubelet[2485]: E0707 06:09:14.855370 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.855500 kubelet[2485]: I0707 06:09:14.855388 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf4tl\" (UniqueName: \"kubernetes.io/projected/5705c009-0d57-436d-b155-b8ac4388465f-kube-api-access-lf4tl\") pod \"csi-node-driver-txlwn\" (UID: \"5705c009-0d57-436d-b155-b8ac4388465f\") " pod="calico-system/csi-node-driver-txlwn"
Jul 7 06:09:14.856236 kubelet[2485]: E0707 06:09:14.856088 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.856236 kubelet[2485]: W0707 06:09:14.856106 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.857278 kubelet[2485]: E0707 06:09:14.857042 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.857278 kubelet[2485]: I0707 06:09:14.857083 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5705c009-0d57-436d-b155-b8ac4388465f-kubelet-dir\") pod \"csi-node-driver-txlwn\" (UID: \"5705c009-0d57-436d-b155-b8ac4388465f\") " pod="calico-system/csi-node-driver-txlwn"
Jul 7 06:09:14.861224 kubelet[2485]: E0707 06:09:14.861071 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.861224 kubelet[2485]: W0707 06:09:14.861089 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.861224 kubelet[2485]: E0707 06:09:14.861144 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.861404 kubelet[2485]: E0707 06:09:14.861393 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.861546 kubelet[2485]: W0707 06:09:14.861456 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.861546 kubelet[2485]: E0707 06:09:14.861505 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.861863 kubelet[2485]: E0707 06:09:14.861693 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.861863 kubelet[2485]: W0707 06:09:14.861705 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.862026 kubelet[2485]: E0707 06:09:14.861956 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.863115 kubelet[2485]: E0707 06:09:14.863090 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.863115 kubelet[2485]: W0707 06:09:14.863109 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.863115 kubelet[2485]: E0707 06:09:14.863154 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.863387 kubelet[2485]: E0707 06:09:14.863324 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.863387 kubelet[2485]: W0707 06:09:14.863336 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.863387 kubelet[2485]: E0707 06:09:14.863349 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.863591 kubelet[2485]: E0707 06:09:14.863529 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.863591 kubelet[2485]: W0707 06:09:14.863538 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.863591 kubelet[2485]: E0707 06:09:14.863547 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.863760 kubelet[2485]: E0707 06:09:14.863716 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.863760 kubelet[2485]: W0707 06:09:14.863729 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.863760 kubelet[2485]: E0707 06:09:14.863738 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.865783 kubelet[2485]: E0707 06:09:14.863893 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.865783 kubelet[2485]: W0707 06:09:14.863902 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.865783 kubelet[2485]: E0707 06:09:14.863913 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.873760 containerd[1443]: time="2025-07-07T06:09:14.873626113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:09:14.873760 containerd[1443]: time="2025-07-07T06:09:14.873675731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:09:14.873760 containerd[1443]: time="2025-07-07T06:09:14.873686895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:14.874534 containerd[1443]: time="2025-07-07T06:09:14.874346493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:14.891149 systemd[1]: Started cri-containerd-de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec.scope - libcontainer container de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec.
Jul 7 06:09:14.922119 containerd[1443]: time="2025-07-07T06:09:14.922076756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l2zc6,Uid:caed9238-72d5-48e5-973d-845d4dff0759,Namespace:calico-system,Attempt:0,} returns sandbox id \"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\""
Jul 7 06:09:14.959388 kubelet[2485]: E0707 06:09:14.959305 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.959388 kubelet[2485]: W0707 06:09:14.959329 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.960118 kubelet[2485]: E0707 06:09:14.959950 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.961357 kubelet[2485]: E0707 06:09:14.961330 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.961357 kubelet[2485]: W0707 06:09:14.961351 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.961451 kubelet[2485]: E0707 06:09:14.961371 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.962333 kubelet[2485]: E0707 06:09:14.962178 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.962333 kubelet[2485]: W0707 06:09:14.962198 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.962333 kubelet[2485]: E0707 06:09:14.962220 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.962512 kubelet[2485]: E0707 06:09:14.962499 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.962575 kubelet[2485]: W0707 06:09:14.962564 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.962635 kubelet[2485]: E0707 06:09:14.962624 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.962882 kubelet[2485]: E0707 06:09:14.962863 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.962882 kubelet[2485]: W0707 06:09:14.962881 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.962958 kubelet[2485]: E0707 06:09:14.962902 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.963341 kubelet[2485]: E0707 06:09:14.963325 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.963383 kubelet[2485]: W0707 06:09:14.963342 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.963514 kubelet[2485]: E0707 06:09:14.963438 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.963904 kubelet[2485]: E0707 06:09:14.963889 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.963935 kubelet[2485]: W0707 06:09:14.963904 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.963974 kubelet[2485]: E0707 06:09:14.963942 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.964417 kubelet[2485]: E0707 06:09:14.964401 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.964456 kubelet[2485]: W0707 06:09:14.964418 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.964456 kubelet[2485]: E0707 06:09:14.964446 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.964749 kubelet[2485]: E0707 06:09:14.964733 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.964749 kubelet[2485]: W0707 06:09:14.964748 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.964820 kubelet[2485]: E0707 06:09:14.964775 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.965004 kubelet[2485]: E0707 06:09:14.964991 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.965048 kubelet[2485]: W0707 06:09:14.965004 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.965080 kubelet[2485]: E0707 06:09:14.965044 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.965230 kubelet[2485]: E0707 06:09:14.965217 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.965230 kubelet[2485]: W0707 06:09:14.965229 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.965274 kubelet[2485]: E0707 06:09:14.965255 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.965433 kubelet[2485]: E0707 06:09:14.965418 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.965433 kubelet[2485]: W0707 06:09:14.965432 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.965517 kubelet[2485]: E0707 06:09:14.965493 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:09:14.965692 kubelet[2485]: E0707 06:09:14.965679 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:09:14.965721 kubelet[2485]: W0707 06:09:14.965697 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:09:14.965721 kubelet[2485]: E0707 06:09:14.965713 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 7 06:09:14.965917 kubelet[2485]: E0707 06:09:14.965902 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.965917 kubelet[2485]: W0707 06:09:14.965916 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.965983 kubelet[2485]: E0707 06:09:14.965927 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.966124 kubelet[2485]: E0707 06:09:14.966111 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.966158 kubelet[2485]: W0707 06:09:14.966124 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.966273 kubelet[2485]: E0707 06:09:14.966203 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.966310 kubelet[2485]: E0707 06:09:14.966281 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.966310 kubelet[2485]: W0707 06:09:14.966296 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.966507 kubelet[2485]: E0707 06:09:14.966389 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.966507 kubelet[2485]: E0707 06:09:14.966437 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.966507 kubelet[2485]: W0707 06:09:14.966461 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.966507 kubelet[2485]: E0707 06:09:14.966482 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.966656 kubelet[2485]: E0707 06:09:14.966640 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.966656 kubelet[2485]: W0707 06:09:14.966652 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.966701 kubelet[2485]: E0707 06:09:14.966663 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.966934 kubelet[2485]: E0707 06:09:14.966903 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.966934 kubelet[2485]: W0707 06:09:14.966933 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.967039 kubelet[2485]: E0707 06:09:14.966943 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.967184 kubelet[2485]: E0707 06:09:14.967168 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.967208 kubelet[2485]: W0707 06:09:14.967182 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.967208 kubelet[2485]: E0707 06:09:14.967196 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.967413 kubelet[2485]: E0707 06:09:14.967399 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.967445 kubelet[2485]: W0707 06:09:14.967414 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.967445 kubelet[2485]: E0707 06:09:14.967441 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.967668 kubelet[2485]: E0707 06:09:14.967640 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.967697 kubelet[2485]: W0707 06:09:14.967668 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.967757 kubelet[2485]: E0707 06:09:14.967744 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.968065 kubelet[2485]: E0707 06:09:14.968047 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.968171 kubelet[2485]: W0707 06:09:14.968063 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.968255 kubelet[2485]: E0707 06:09:14.968219 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:14.968424 kubelet[2485]: E0707 06:09:14.968409 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.968424 kubelet[2485]: W0707 06:09:14.968423 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.968488 kubelet[2485]: E0707 06:09:14.968434 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.968857 kubelet[2485]: E0707 06:09:14.968836 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.968857 kubelet[2485]: W0707 06:09:14.968854 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.968928 kubelet[2485]: E0707 06:09:14.968865 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:14.978453 kubelet[2485]: E0707 06:09:14.978394 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:14.978453 kubelet[2485]: W0707 06:09:14.978412 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:14.978453 kubelet[2485]: E0707 06:09:14.978426 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:15.574628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419936820.mount: Deactivated successfully. 
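
The repeating burst above is a single probe loop: kubelet's FlexVolume plugin prober execs the driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and decodes its stdout as JSON. The binary is not installed yet, so the exec fails (kubelet logs it as "executable file not found in $PATH"), stdout is empty, and Go's json.Unmarshal on empty input yields exactly the logged "unexpected end of JSON input". A minimal sketch of that call path (driverStatus and initDriver are illustrative names, not kubelet's internals; the JSON shape follows the documented FlexVolume contract):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// initDriver calls `<driver> init` and parses the reply, reproducing the
// two failures in the log: the exec error for the missing binary and the
// unmarshal error on its empty output.
func initDriver(path string) (*driverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal([]byte(""), ...) => "unexpected end of JSON input"
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", out, err)
	}
	return &st, nil
}

func main() {
	_, err := initDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}

The spam stops once something installs that binary, which is what the flexvol-driver init container being pulled later in this log exists to do.
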
Jul 7 06:09:15.917149 containerd[1443]: time="2025-07-07T06:09:15.917027861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:15.917670 containerd[1443]: time="2025-07-07T06:09:15.917598298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 06:09:15.918494 containerd[1443]: time="2025-07-07T06:09:15.918465677Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:15.921083 containerd[1443]: time="2025-07-07T06:09:15.921044446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:15.921760 containerd[1443]: time="2025-07-07T06:09:15.921715557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.223570326s" Jul 7 06:09:15.921760 containerd[1443]: time="2025-07-07T06:09:15.921753610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 06:09:15.922787 containerd[1443]: time="2025-07-07T06:09:15.922516073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:09:15.938785 containerd[1443]: time="2025-07-07T06:09:15.938742787Z" level=info msg="CreateContainer within sandbox \"aed64b58719ac1d2d53aa02884ba878d457f982875ac4c04ceee80a2b2863ff3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:09:15.980338 containerd[1443]: time="2025-07-07T06:09:15.980249416Z" level=info msg="CreateContainer within sandbox \"aed64b58719ac1d2d53aa02884ba878d457f982875ac4c04ceee80a2b2863ff3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e6648fe2a12a8533de5bfae7698a70825e42bab0eb054e609a7eff9e1f026e9b\"" Jul 7 06:09:15.981003 containerd[1443]: time="2025-07-07T06:09:15.980941774Z" level=info msg="StartContainer for \"e6648fe2a12a8533de5bfae7698a70825e42bab0eb054e609a7eff9e1f026e9b\"" Jul 7 06:09:16.009135 systemd[1]: Started cri-containerd-e6648fe2a12a8533de5bfae7698a70825e42bab0eb054e609a7eff9e1f026e9b.scope - libcontainer container e6648fe2a12a8533de5bfae7698a70825e42bab0eb054e609a7eff9e1f026e9b. 
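
The typha pull stats just above pin down the effective fetch rate: 33,087,207 bytes read in 1.223570326 s is roughly 27 MB/s (the separately reported size "33087061" is the image's recorded size, which differs slightly from the bytes actually transferred). A quick check of the arithmetic, with both numbers copied from the log:

package main

import "fmt"

func main() {
	const bytesRead = 33087207  // "bytes read" from the typha pull entry
	const seconds = 1.223570326 // "in 1.223570326s" from the same entry
	fmt.Printf("effective pull rate: %.1f MB/s\n", float64(bytesRead)/seconds/1e6)
	// effective pull rate: 27.0 MB/s
}
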
Jul 7 06:09:16.037017 containerd[1443]: time="2025-07-07T06:09:16.036972746Z" level=info msg="StartContainer for \"e6648fe2a12a8533de5bfae7698a70825e42bab0eb054e609a7eff9e1f026e9b\" returns successfully" Jul 7 06:09:16.624264 kubelet[2485]: E0707 06:09:16.624194 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txlwn" podUID="5705c009-0d57-436d-b155-b8ac4388465f" Jul 7 06:09:16.705266 kubelet[2485]: E0707 06:09:16.704938 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:16.714814 kubelet[2485]: I0707 06:09:16.714759 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d794dff9b-fdhfd" podStartSLOduration=1.489884541 podStartE2EDuration="2.71474048s" podCreationTimestamp="2025-07-07 06:09:14 +0000 UTC" firstStartedPulling="2025-07-07 06:09:14.697539893 +0000 UTC m=+19.160394629" lastFinishedPulling="2025-07-07 06:09:15.922395832 +0000 UTC m=+20.385250568" observedRunningTime="2025-07-07 06:09:16.713237024 +0000 UTC m=+21.176091760" watchObservedRunningTime="2025-07-07 06:09:16.71474048 +0000 UTC m=+21.177595216" Jul 7 06:09:16.737586 kubelet[2485]: E0707 06:09:16.737457 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.737586 kubelet[2485]: W0707 06:09:16.737482 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.737586 kubelet[2485]: E0707 06:09:16.737504 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.737842 kubelet[2485]: E0707 06:09:16.737828 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.737942 kubelet[2485]: W0707 06:09:16.737894 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.738031 kubelet[2485]: E0707 06:09:16.738017 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.738433 kubelet[2485]: E0707 06:09:16.738324 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.738433 kubelet[2485]: W0707 06:09:16.738337 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.738433 kubelet[2485]: E0707 06:09:16.738349 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.738622 kubelet[2485]: E0707 06:09:16.738600 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.738687 kubelet[2485]: W0707 06:09:16.738675 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.738852 kubelet[2485]: E0707 06:09:16.738746 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.738996 kubelet[2485]: E0707 06:09:16.738982 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.739155 kubelet[2485]: W0707 06:09:16.739056 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.739155 kubelet[2485]: E0707 06:09:16.739073 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.739291 kubelet[2485]: E0707 06:09:16.739279 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.739345 kubelet[2485]: W0707 06:09:16.739334 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.739411 kubelet[2485]: E0707 06:09:16.739399 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.739651 kubelet[2485]: E0707 06:09:16.739632 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.739754 kubelet[2485]: W0707 06:09:16.739740 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.739816 kubelet[2485]: E0707 06:09:16.739804 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.740082 kubelet[2485]: E0707 06:09:16.740067 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.740183 kubelet[2485]: W0707 06:09:16.740162 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.740269 kubelet[2485]: E0707 06:09:16.740257 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.740637 kubelet[2485]: E0707 06:09:16.740611 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.740806 kubelet[2485]: W0707 06:09:16.740710 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.740806 kubelet[2485]: E0707 06:09:16.740728 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.740944 kubelet[2485]: E0707 06:09:16.740925 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.741015 kubelet[2485]: W0707 06:09:16.741003 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.741159 kubelet[2485]: E0707 06:09:16.741079 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.741322 kubelet[2485]: E0707 06:09:16.741300 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.741449 kubelet[2485]: W0707 06:09:16.741388 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.741449 kubelet[2485]: E0707 06:09:16.741404 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.741793 kubelet[2485]: E0707 06:09:16.741687 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.741793 kubelet[2485]: W0707 06:09:16.741701 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.741793 kubelet[2485]: E0707 06:09:16.741711 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.741984 kubelet[2485]: E0707 06:09:16.741957 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.742140 kubelet[2485]: W0707 06:09:16.742039 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.742140 kubelet[2485]: E0707 06:09:16.742055 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.742278 kubelet[2485]: E0707 06:09:16.742264 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.742330 kubelet[2485]: W0707 06:09:16.742319 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.742471 kubelet[2485]: E0707 06:09:16.742384 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.742578 kubelet[2485]: E0707 06:09:16.742565 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.742657 kubelet[2485]: W0707 06:09:16.742645 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.742716 kubelet[2485]: E0707 06:09:16.742705 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.778395 kubelet[2485]: E0707 06:09:16.778328 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.778395 kubelet[2485]: W0707 06:09:16.778350 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.778395 kubelet[2485]: E0707 06:09:16.778371 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.778737 kubelet[2485]: E0707 06:09:16.778722 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.778737 kubelet[2485]: W0707 06:09:16.778736 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.778815 kubelet[2485]: E0707 06:09:16.778755 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.779172 kubelet[2485]: E0707 06:09:16.779155 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.779172 kubelet[2485]: W0707 06:09:16.779170 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.779266 kubelet[2485]: E0707 06:09:16.779187 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.779836 kubelet[2485]: E0707 06:09:16.779770 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.779836 kubelet[2485]: W0707 06:09:16.779785 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.779836 kubelet[2485]: E0707 06:09:16.779801 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.780270 kubelet[2485]: E0707 06:09:16.780131 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.780270 kubelet[2485]: W0707 06:09:16.780145 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.780270 kubelet[2485]: E0707 06:09:16.780183 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.780395 kubelet[2485]: E0707 06:09:16.780294 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.780395 kubelet[2485]: W0707 06:09:16.780302 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.780395 kubelet[2485]: E0707 06:09:16.780351 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.780728 kubelet[2485]: E0707 06:09:16.780587 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.780728 kubelet[2485]: W0707 06:09:16.780598 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.780728 kubelet[2485]: E0707 06:09:16.780635 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.780974 kubelet[2485]: E0707 06:09:16.780917 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.780974 kubelet[2485]: W0707 06:09:16.780929 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.780974 kubelet[2485]: E0707 06:09:16.780945 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.781381 kubelet[2485]: E0707 06:09:16.781315 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.781381 kubelet[2485]: W0707 06:09:16.781328 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.781381 kubelet[2485]: E0707 06:09:16.781344 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.781988 kubelet[2485]: E0707 06:09:16.781822 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.781988 kubelet[2485]: W0707 06:09:16.781850 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.781988 kubelet[2485]: E0707 06:09:16.781872 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.782172 kubelet[2485]: E0707 06:09:16.782156 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.782225 kubelet[2485]: W0707 06:09:16.782214 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.782321 kubelet[2485]: E0707 06:09:16.782294 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.782761 kubelet[2485]: E0707 06:09:16.782575 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.782761 kubelet[2485]: W0707 06:09:16.782625 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.782761 kubelet[2485]: E0707 06:09:16.782655 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.783393 kubelet[2485]: E0707 06:09:16.783231 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.783393 kubelet[2485]: W0707 06:09:16.783251 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.783393 kubelet[2485]: E0707 06:09:16.783270 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.783718 kubelet[2485]: E0707 06:09:16.783698 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.784191 kubelet[2485]: W0707 06:09:16.784024 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.784191 kubelet[2485]: E0707 06:09:16.784149 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.784583 kubelet[2485]: E0707 06:09:16.784563 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.785172 kubelet[2485]: W0707 06:09:16.784723 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.785172 kubelet[2485]: E0707 06:09:16.784756 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.785172 kubelet[2485]: E0707 06:09:16.785040 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.785172 kubelet[2485]: W0707 06:09:16.785053 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.785172 kubelet[2485]: E0707 06:09:16.785071 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.785407 kubelet[2485]: E0707 06:09:16.785385 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.785407 kubelet[2485]: W0707 06:09:16.785403 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.785478 kubelet[2485]: E0707 06:09:16.785418 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:09:16.785767 kubelet[2485]: E0707 06:09:16.785745 2485 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:09:16.785767 kubelet[2485]: W0707 06:09:16.785758 2485 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:09:16.785767 kubelet[2485]: E0707 06:09:16.785769 2485 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:09:16.795917 containerd[1443]: time="2025-07-07T06:09:16.795878225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:16.797600 containerd[1443]: time="2025-07-07T06:09:16.797468869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 06:09:16.798370 containerd[1443]: time="2025-07-07T06:09:16.798331674Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:16.802052 containerd[1443]: time="2025-07-07T06:09:16.801989200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:16.802988 containerd[1443]: time="2025-07-07T06:09:16.802890177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 880.341013ms" Jul 7 06:09:16.802988 containerd[1443]: time="2025-07-07T06:09:16.802937312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 06:09:16.805766 containerd[1443]: time="2025-07-07T06:09:16.805733754Z" level=info msg="CreateContainer within sandbox \"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:09:16.816854 containerd[1443]: time="2025-07-07T06:09:16.816819808Z" level=info msg="CreateContainer within sandbox \"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8\"" Jul 7 06:09:16.817325 containerd[1443]: time="2025-07-07T06:09:16.817276479Z" level=info msg="StartContainer for \"1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8\"" Jul 7 06:09:16.842122 systemd[1]: Started cri-containerd-1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8.scope - libcontainer container 1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8. Jul 7 06:09:16.879778 containerd[1443]: time="2025-07-07T06:09:16.879580736Z" level=info msg="StartContainer for \"1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8\" returns successfully" Jul 7 06:09:16.896267 systemd[1]: cri-containerd-1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8.scope: Deactivated successfully. Jul 7 06:09:16.917425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8-rootfs.mount: Deactivated successfully. 
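
pod2daemon-flexvol is the init container that drops Calico's uds driver into the kubelet plugin directory probed above, and the scope deactivating right after StartContainer is just that short-lived container exiting. Once the binary is in place, its init call has to print a JSON status for the prober to accept; a minimal sketch of a conforming reply, assuming the standard FlexVolume field names:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The smallest useful "init" reply: report success and tell kubelet
	// this driver does not implement attach/detach.
	resp := map[string]interface{}{
		"status":       "Success",
		"capabilities": map[string]bool{"attach": false},
	}
	b, _ := json.Marshal(resp)
	fmt.Println(string(b)) // {"capabilities":{"attach":false},"status":"Success"}
}
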
Jul 7 06:09:16.998427 containerd[1443]: time="2025-07-07T06:09:16.998364091Z" level=info msg="shim disconnected" id=1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8 namespace=k8s.io Jul 7 06:09:16.999414 containerd[1443]: time="2025-07-07T06:09:16.998435714Z" level=warning msg="cleaning up after shim disconnected" id=1c38bd0d478eab4f4024792bcdcb09db1f1eff141eb2418defb5ffcc00058bf8 namespace=k8s.io Jul 7 06:09:16.999414 containerd[1443]: time="2025-07-07T06:09:16.998445598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:09:17.008044 containerd[1443]: time="2025-07-07T06:09:17.007998287Z" level=warning msg="cleanup warnings time=\"2025-07-07T06:09:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 06:09:17.706854 kubelet[2485]: I0707 06:09:17.706809 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:17.707298 kubelet[2485]: E0707 06:09:17.707138 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:17.708492 containerd[1443]: time="2025-07-07T06:09:17.708453820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:09:18.624229 kubelet[2485]: E0707 06:09:18.624164 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txlwn" podUID="5705c009-0d57-436d-b155-b8ac4388465f" Jul 7 06:09:20.309503 containerd[1443]: time="2025-07-07T06:09:20.309443024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:20.309997 containerd[1443]: time="2025-07-07T06:09:20.309947284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 06:09:20.310831 containerd[1443]: time="2025-07-07T06:09:20.310771673Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:20.312831 containerd[1443]: time="2025-07-07T06:09:20.312798557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:20.313656 containerd[1443]: time="2025-07-07T06:09:20.313624267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.605130154s" Jul 7 06:09:20.313656 containerd[1443]: time="2025-07-07T06:09:20.313654195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 06:09:20.317366 containerd[1443]: time="2025-07-07T06:09:20.317325336Z" level=info msg="CreateContainer within sandbox 
\"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:09:20.330281 containerd[1443]: time="2025-07-07T06:09:20.330232405Z" level=info msg="CreateContainer within sandbox \"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396\"" Jul 7 06:09:20.331201 containerd[1443]: time="2025-07-07T06:09:20.330626715Z" level=info msg="StartContainer for \"e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396\"" Jul 7 06:09:20.361156 systemd[1]: Started cri-containerd-e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396.scope - libcontainer container e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396. Jul 7 06:09:20.392125 containerd[1443]: time="2025-07-07T06:09:20.392077643Z" level=info msg="StartContainer for \"e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396\" returns successfully" Jul 7 06:09:20.624449 kubelet[2485]: E0707 06:09:20.624221 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-txlwn" podUID="5705c009-0d57-436d-b155-b8ac4388465f" Jul 7 06:09:21.025712 systemd[1]: cri-containerd-e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396.scope: Deactivated successfully. Jul 7 06:09:21.046016 containerd[1443]: time="2025-07-07T06:09:21.045934976Z" level=info msg="shim disconnected" id=e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396 namespace=k8s.io Jul 7 06:09:21.046016 containerd[1443]: time="2025-07-07T06:09:21.046006635Z" level=warning msg="cleaning up after shim disconnected" id=e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396 namespace=k8s.io Jul 7 06:09:21.046016 containerd[1443]: time="2025-07-07T06:09:21.046015597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:09:21.116190 kubelet[2485]: I0707 06:09:21.116159 2485 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:09:21.181408 systemd[1]: Created slice kubepods-burstable-pod6a09f92b_f03b_46c8_9b26_0233f582bf66.slice - libcontainer container kubepods-burstable-pod6a09f92b_f03b_46c8_9b26_0233f582bf66.slice. Jul 7 06:09:21.190177 systemd[1]: Created slice kubepods-burstable-podacb26600_e422_4fa9_86c9_1e99272ac907.slice - libcontainer container kubepods-burstable-podacb26600_e422_4fa9_86c9_1e99272ac907.slice. Jul 7 06:09:21.206177 systemd[1]: Created slice kubepods-besteffort-pod058206b3_65d3_47c5_ac92_f4a3b7ef1d3d.slice - libcontainer container kubepods-besteffort-pod058206b3_65d3_47c5_ac92_f4a3b7ef1d3d.slice. Jul 7 06:09:21.211157 systemd[1]: Created slice kubepods-besteffort-podb4655dce_563f_40a1_900f_1c03e1a27866.slice - libcontainer container kubepods-besteffort-podb4655dce_563f_40a1_900f_1c03e1a27866.slice. 
Jul 7 06:09:21.214470 kubelet[2485]: I0707 06:09:21.214389 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq622\" (UniqueName: \"kubernetes.io/projected/058206b3-65d3-47c5-ac92-f4a3b7ef1d3d-kube-api-access-mq622\") pod \"calico-kube-controllers-7fbfd84b85-7qqnq\" (UID: \"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d\") " pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" Jul 7 06:09:21.214470 kubelet[2485]: I0707 06:09:21.214448 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6p42\" (UniqueName: \"kubernetes.io/projected/6a09f92b-f03b-46c8-9b26-0233f582bf66-kube-api-access-r6p42\") pod \"coredns-668d6bf9bc-m7xfp\" (UID: \"6a09f92b-f03b-46c8-9b26-0233f582bf66\") " pod="kube-system/coredns-668d6bf9bc-m7xfp" Jul 7 06:09:21.214599 kubelet[2485]: I0707 06:09:21.214536 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a09f92b-f03b-46c8-9b26-0233f582bf66-config-volume\") pod \"coredns-668d6bf9bc-m7xfp\" (UID: \"6a09f92b-f03b-46c8-9b26-0233f582bf66\") " pod="kube-system/coredns-668d6bf9bc-m7xfp" Jul 7 06:09:21.214599 kubelet[2485]: I0707 06:09:21.214559 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/058206b3-65d3-47c5-ac92-f4a3b7ef1d3d-tigera-ca-bundle\") pod \"calico-kube-controllers-7fbfd84b85-7qqnq\" (UID: \"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d\") " pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" Jul 7 06:09:21.215871 kubelet[2485]: I0707 06:09:21.215178 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbcjg\" (UniqueName: \"kubernetes.io/projected/acb26600-e422-4fa9-86c9-1e99272ac907-kube-api-access-pbcjg\") pod \"coredns-668d6bf9bc-wtdw2\" (UID: \"acb26600-e422-4fa9-86c9-1e99272ac907\") " pod="kube-system/coredns-668d6bf9bc-wtdw2" Jul 7 06:09:21.215871 kubelet[2485]: I0707 06:09:21.215234 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acb26600-e422-4fa9-86c9-1e99272ac907-config-volume\") pod \"coredns-668d6bf9bc-wtdw2\" (UID: \"acb26600-e422-4fa9-86c9-1e99272ac907\") " pod="kube-system/coredns-668d6bf9bc-wtdw2" Jul 7 06:09:21.224389 systemd[1]: Created slice kubepods-besteffort-pod0f7848e9_158e_4510_8474_f086afb371a7.slice - libcontainer container kubepods-besteffort-pod0f7848e9_158e_4510_8474_f086afb371a7.slice. Jul 7 06:09:21.231511 systemd[1]: Created slice kubepods-besteffort-podc16f1b03_a360_4556_a60c_eadfcd16ef1e.slice - libcontainer container kubepods-besteffort-podc16f1b03_a360_4556_a60c_eadfcd16ef1e.slice. Jul 7 06:09:21.237909 systemd[1]: Created slice kubepods-besteffort-pod20b82c11_f0c8_4cab_bff0_a1f67bee9ab4.slice - libcontainer container kubepods-besteffort-pod20b82c11_f0c8_4cab_bff0_a1f67bee9ab4.slice. Jul 7 06:09:21.243033 systemd[1]: Created slice kubepods-besteffort-pod2b552d16_8bbf_4c9c_b453_0c942c087079.slice - libcontainer container kubepods-besteffort-pod2b552d16_8bbf_4c9c_b453_0c942c087079.slice. 
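
The kube-api-access-* volumes in the reconciler entries (mq622, r6p42, pbcjg, ...) are the projected service-account token volumes kubelet injects into every pod; the five-character suffix comes from Kubernetes' simple name generator. A sketch of the same scheme (the alphabet below is assumed to match apimachinery's rand utility, which drops vowels and look-alike characters; worth checking against your Kubernetes version):

package main

import (
	"fmt"
	"math/rand"
)

// Alphabet used by k8s.io/apimachinery/pkg/util/rand: no vowels, no 0/1/3,
// so generated suffixes stay unambiguous (compare mq622, r6p42 above).
const alphanums = "bcdfghjklmnpqrstvwxz2456789"

func generateName(base string) string {
	suffix := make([]byte, 5)
	for i := range suffix {
		suffix[i] = alphanums[rand.Intn(len(alphanums))]
	}
	return base + string(suffix)
}

func main() {
	fmt.Println(generateName("kube-api-access-")) // e.g. kube-api-access-mq622
}
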
Jul 7 06:09:21.316090 kubelet[2485]: I0707 06:09:21.315915 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f7848e9-158e-4510-8474-f086afb371a7-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-lmf8d\" (UID: \"0f7848e9-158e-4510-8474-f086afb371a7\") " pod="calico-system/goldmane-768f4c5c69-lmf8d" Jul 7 06:09:21.316090 kubelet[2485]: I0707 06:09:21.316002 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-backend-key-pair\") pod \"whisker-54c9c5d6b7-8slwf\" (UID: \"b4655dce-563f-40a1-900f-1c03e1a27866\") " pod="calico-system/whisker-54c9c5d6b7-8slwf" Jul 7 06:09:21.316090 kubelet[2485]: I0707 06:09:21.316052 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7q5c\" (UniqueName: \"kubernetes.io/projected/20b82c11-f0c8-4cab-bff0-a1f67bee9ab4-kube-api-access-s7q5c\") pod \"calico-apiserver-586767dc6-st5cc\" (UID: \"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4\") " pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" Jul 7 06:09:21.316090 kubelet[2485]: I0707 06:09:21.316094 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0f7848e9-158e-4510-8474-f086afb371a7-goldmane-key-pair\") pod \"goldmane-768f4c5c69-lmf8d\" (UID: \"0f7848e9-158e-4510-8474-f086afb371a7\") " pod="calico-system/goldmane-768f4c5c69-lmf8d" Jul 7 06:09:21.316376 kubelet[2485]: I0707 06:09:21.316114 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b552d16-8bbf-4c9c-b453-0c942c087079-calico-apiserver-certs\") pod \"calico-apiserver-5d7db7788-k7kxf\" (UID: \"2b552d16-8bbf-4c9c-b453-0c942c087079\") " pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" Jul 7 06:09:21.316376 kubelet[2485]: I0707 06:09:21.316143 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f7848e9-158e-4510-8474-f086afb371a7-config\") pod \"goldmane-768f4c5c69-lmf8d\" (UID: \"0f7848e9-158e-4510-8474-f086afb371a7\") " pod="calico-system/goldmane-768f4c5c69-lmf8d" Jul 7 06:09:21.316376 kubelet[2485]: I0707 06:09:21.316174 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20b82c11-f0c8-4cab-bff0-a1f67bee9ab4-calico-apiserver-certs\") pod \"calico-apiserver-586767dc6-st5cc\" (UID: \"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4\") " pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" Jul 7 06:09:21.316376 kubelet[2485]: I0707 06:09:21.316202 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9282k\" (UniqueName: \"kubernetes.io/projected/2b552d16-8bbf-4c9c-b453-0c942c087079-kube-api-access-9282k\") pod \"calico-apiserver-5d7db7788-k7kxf\" (UID: \"2b552d16-8bbf-4c9c-b453-0c942c087079\") " pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" Jul 7 06:09:21.316376 kubelet[2485]: I0707 06:09:21.316222 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4smm\" (UniqueName: 
\"kubernetes.io/projected/c16f1b03-a360-4556-a60c-eadfcd16ef1e-kube-api-access-m4smm\") pod \"calico-apiserver-5d7db7788-2k4fl\" (UID: \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\") " pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" Jul 7 06:09:21.316489 kubelet[2485]: I0707 06:09:21.316250 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4s4g\" (UniqueName: \"kubernetes.io/projected/b4655dce-563f-40a1-900f-1c03e1a27866-kube-api-access-n4s4g\") pod \"whisker-54c9c5d6b7-8slwf\" (UID: \"b4655dce-563f-40a1-900f-1c03e1a27866\") " pod="calico-system/whisker-54c9c5d6b7-8slwf" Jul 7 06:09:21.316489 kubelet[2485]: I0707 06:09:21.316270 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fmk\" (UniqueName: \"kubernetes.io/projected/0f7848e9-158e-4510-8474-f086afb371a7-kube-api-access-h9fmk\") pod \"goldmane-768f4c5c69-lmf8d\" (UID: \"0f7848e9-158e-4510-8474-f086afb371a7\") " pod="calico-system/goldmane-768f4c5c69-lmf8d" Jul 7 06:09:21.316489 kubelet[2485]: I0707 06:09:21.316287 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c16f1b03-a360-4556-a60c-eadfcd16ef1e-calico-apiserver-certs\") pod \"calico-apiserver-5d7db7788-2k4fl\" (UID: \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\") " pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" Jul 7 06:09:21.316489 kubelet[2485]: I0707 06:09:21.316305 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-ca-bundle\") pod \"whisker-54c9c5d6b7-8slwf\" (UID: \"b4655dce-563f-40a1-900f-1c03e1a27866\") " pod="calico-system/whisker-54c9c5d6b7-8slwf" Jul 7 06:09:21.325183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e811eb86a9cce7f0bd92211802c99adcbfe0d02f78bc1a76ee884e6cdea50396-rootfs.mount: Deactivated successfully. 
Jul 7 06:09:21.488258 kubelet[2485]: E0707 06:09:21.488035 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:21.489022 containerd[1443]: time="2025-07-07T06:09:21.488528290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m7xfp,Uid:6a09f92b-f03b-46c8-9b26-0233f582bf66,Namespace:kube-system,Attempt:0,}" Jul 7 06:09:21.495766 kubelet[2485]: E0707 06:09:21.495715 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:21.496470 containerd[1443]: time="2025-07-07T06:09:21.496400953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtdw2,Uid:acb26600-e422-4fa9-86c9-1e99272ac907,Namespace:kube-system,Attempt:0,}" Jul 7 06:09:21.525834 containerd[1443]: time="2025-07-07T06:09:21.524728681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c9c5d6b7-8slwf,Uid:b4655dce-563f-40a1-900f-1c03e1a27866,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:21.530916 containerd[1443]: time="2025-07-07T06:09:21.527894007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fbfd84b85-7qqnq,Uid:058206b3-65d3-47c5-ac92-f4a3b7ef1d3d,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:21.530916 containerd[1443]: time="2025-07-07T06:09:21.528173881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lmf8d,Uid:0f7848e9-158e-4510-8474-f086afb371a7,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:21.544411 containerd[1443]: time="2025-07-07T06:09:21.537477007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-2k4fl,Uid:c16f1b03-a360-4556-a60c-eadfcd16ef1e,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:09:21.551143 containerd[1443]: time="2025-07-07T06:09:21.550811169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-k7kxf,Uid:2b552d16-8bbf-4c9c-b453-0c942c087079,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:09:21.551224 containerd[1443]: time="2025-07-07T06:09:21.551157621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586767dc6-st5cc,Uid:20b82c11-f0c8-4cab-bff0-a1f67bee9ab4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:09:21.775116 containerd[1443]: time="2025-07-07T06:09:21.775056794Z" level=error msg="Failed to destroy network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.775736 containerd[1443]: time="2025-07-07T06:09:21.775700646Z" level=error msg="encountered an error cleaning up failed sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.775872 containerd[1443]: time="2025-07-07T06:09:21.775848645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m7xfp,Uid:6a09f92b-f03b-46c8-9b26-0233f582bf66,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network 
for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.777185 kubelet[2485]: E0707 06:09:21.777139 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.779710 kubelet[2485]: E0707 06:09:21.779661 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m7xfp" Jul 7 06:09:21.784113 kubelet[2485]: E0707 06:09:21.784059 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-m7xfp" Jul 7 06:09:21.784202 kubelet[2485]: E0707 06:09:21.784150 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-m7xfp_kube-system(6a09f92b-f03b-46c8-9b26-0233f582bf66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-m7xfp_kube-system(6a09f92b-f03b-46c8-9b26-0233f582bf66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-m7xfp" podUID="6a09f92b-f03b-46c8-9b26-0233f582bf66" Jul 7 06:09:21.789417 containerd[1443]: time="2025-07-07T06:09:21.789357654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:09:21.824034 containerd[1443]: time="2025-07-07T06:09:21.822412164Z" level=error msg="Failed to destroy network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.824378 containerd[1443]: time="2025-07-07T06:09:21.824343960Z" level=error msg="encountered an error cleaning up failed sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.824425 containerd[1443]: time="2025-07-07T06:09:21.824407537Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c9c5d6b7-8slwf,Uid:b4655dce-563f-40a1-900f-1c03e1a27866,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.824650 kubelet[2485]: E0707 06:09:21.824614 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.824713 kubelet[2485]: E0707 06:09:21.824667 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c9c5d6b7-8slwf" Jul 7 06:09:21.824898 kubelet[2485]: E0707 06:09:21.824686 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c9c5d6b7-8slwf" Jul 7 06:09:21.824958 kubelet[2485]: E0707 06:09:21.824927 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54c9c5d6b7-8slwf_calico-system(b4655dce-563f-40a1-900f-1c03e1a27866)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54c9c5d6b7-8slwf_calico-system(b4655dce-563f-40a1-900f-1c03e1a27866)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c9c5d6b7-8slwf" podUID="b4655dce-563f-40a1-900f-1c03e1a27866" Jul 7 06:09:21.832357 containerd[1443]: time="2025-07-07T06:09:21.832316450Z" level=error msg="Failed to destroy network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.832675 containerd[1443]: time="2025-07-07T06:09:21.832646338Z" level=error msg="encountered an error cleaning up failed sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.832731 containerd[1443]: 
time="2025-07-07T06:09:21.832696712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lmf8d,Uid:0f7848e9-158e-4510-8474-f086afb371a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.832972 kubelet[2485]: E0707 06:09:21.832910 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.833693 kubelet[2485]: E0707 06:09:21.833656 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lmf8d" Jul 7 06:09:21.833806 kubelet[2485]: E0707 06:09:21.833790 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-lmf8d" Jul 7 06:09:21.833945 kubelet[2485]: E0707 06:09:21.833902 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-lmf8d_calico-system(0f7848e9-158e-4510-8474-f086afb371a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-lmf8d_calico-system(0f7848e9-158e-4510-8474-f086afb371a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lmf8d" podUID="0f7848e9-158e-4510-8474-f086afb371a7" Jul 7 06:09:21.836350 containerd[1443]: time="2025-07-07T06:09:21.836293553Z" level=error msg="Failed to destroy network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.837127 containerd[1443]: time="2025-07-07T06:09:21.837077042Z" level=error msg="encountered an error cleaning up failed sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 06:09:21.837198 containerd[1443]: time="2025-07-07T06:09:21.837137218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtdw2,Uid:acb26600-e422-4fa9-86c9-1e99272ac907,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.837669 kubelet[2485]: E0707 06:09:21.837637 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.837754 kubelet[2485]: E0707 06:09:21.837732 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wtdw2" Jul 7 06:09:21.837793 kubelet[2485]: E0707 06:09:21.837753 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wtdw2" Jul 7 06:09:21.837826 kubelet[2485]: E0707 06:09:21.837791 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wtdw2_kube-system(acb26600-e422-4fa9-86c9-1e99272ac907)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wtdw2_kube-system(acb26600-e422-4fa9-86c9-1e99272ac907)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wtdw2" podUID="acb26600-e422-4fa9-86c9-1e99272ac907" Jul 7 06:09:21.849517 containerd[1443]: time="2025-07-07T06:09:21.849266978Z" level=error msg="Failed to destroy network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.849994 containerd[1443]: time="2025-07-07T06:09:21.849886064Z" level=error msg="encountered an error cleaning up failed sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.850069 containerd[1443]: time="2025-07-07T06:09:21.849953002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-k7kxf,Uid:2b552d16-8bbf-4c9c-b453-0c942c087079,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.850655 kubelet[2485]: E0707 06:09:21.850301 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.850655 kubelet[2485]: E0707 06:09:21.850356 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" Jul 7 06:09:21.850655 kubelet[2485]: E0707 06:09:21.850383 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" Jul 7 06:09:21.850856 containerd[1443]: time="2025-07-07T06:09:21.850594773Z" level=error msg="Failed to destroy network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.850935 kubelet[2485]: E0707 06:09:21.850419 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7db7788-k7kxf_calico-apiserver(2b552d16-8bbf-4c9c-b453-0c942c087079)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7db7788-k7kxf_calico-apiserver(2b552d16-8bbf-4c9c-b453-0c942c087079)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" podUID="2b552d16-8bbf-4c9c-b453-0c942c087079" Jul 7 06:09:21.851272 containerd[1443]: time="2025-07-07T06:09:21.851239665Z" level=error msg="encountered an error cleaning up failed sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.851343 containerd[1443]: time="2025-07-07T06:09:21.851314565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fbfd84b85-7qqnq,Uid:058206b3-65d3-47c5-ac92-f4a3b7ef1d3d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.851559 kubelet[2485]: E0707 06:09:21.851514 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.851559 kubelet[2485]: E0707 06:09:21.851550 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" Jul 7 06:09:21.851718 kubelet[2485]: E0707 06:09:21.851568 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" Jul 7 06:09:21.851718 kubelet[2485]: E0707 06:09:21.851639 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fbfd84b85-7qqnq_calico-system(058206b3-65d3-47c5-ac92-f4a3b7ef1d3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fbfd84b85-7qqnq_calico-system(058206b3-65d3-47c5-ac92-f4a3b7ef1d3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" podUID="058206b3-65d3-47c5-ac92-f4a3b7ef1d3d" Jul 7 06:09:21.862891 containerd[1443]: time="2025-07-07T06:09:21.862852047Z" level=error msg="Failed to destroy network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.863184 containerd[1443]: time="2025-07-07T06:09:21.863150687Z" level=error msg="encountered an error 
cleaning up failed sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.863230 containerd[1443]: time="2025-07-07T06:09:21.863196419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-2k4fl,Uid:c16f1b03-a360-4556-a60c-eadfcd16ef1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.863406 kubelet[2485]: E0707 06:09:21.863377 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.863478 kubelet[2485]: E0707 06:09:21.863421 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" Jul 7 06:09:21.863478 kubelet[2485]: E0707 06:09:21.863441 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" Jul 7 06:09:21.863577 kubelet[2485]: E0707 06:09:21.863479 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7db7788-2k4fl_calico-apiserver(c16f1b03-a360-4556-a60c-eadfcd16ef1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7db7788-2k4fl_calico-apiserver(c16f1b03-a360-4556-a60c-eadfcd16ef1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" podUID="c16f1b03-a360-4556-a60c-eadfcd16ef1e" Jul 7 06:09:21.863648 containerd[1443]: time="2025-07-07T06:09:21.863582442Z" level=error msg="Failed to destroy network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Jul 7 06:09:21.863951 containerd[1443]: time="2025-07-07T06:09:21.863919453Z" level=error msg="encountered an error cleaning up failed sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.864009 containerd[1443]: time="2025-07-07T06:09:21.863985350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586767dc6-st5cc,Uid:20b82c11-f0c8-4cab-bff0-a1f67bee9ab4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.864152 kubelet[2485]: E0707 06:09:21.864122 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:21.864195 kubelet[2485]: E0707 06:09:21.864158 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" Jul 7 06:09:21.864195 kubelet[2485]: E0707 06:09:21.864186 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" Jul 7 06:09:21.864264 kubelet[2485]: E0707 06:09:21.864212 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-586767dc6-st5cc_calico-apiserver(20b82c11-f0c8-4cab-bff0-a1f67bee9ab4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-586767dc6-st5cc_calico-apiserver(20b82c11-f0c8-4cab-bff0-a1f67bee9ab4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" podUID="20b82c11-f0c8-4cab-bff0-a1f67bee9ab4" Jul 7 06:09:22.630190 systemd[1]: Created slice kubepods-besteffort-pod5705c009_0d57_436d_b155_b8ac4388465f.slice - libcontainer container kubepods-besteffort-pod5705c009_0d57_436d_b155_b8ac4388465f.slice. 
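Every sandbox failure above is the same underlying condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that only exists once the calico/node container has started and mounted /var/lib/calico/. Until then, every ADD and DEL fails with the identical error. A minimal Go sketch of that gate, not Calico's actual source; the path comes from the error text, everything else is illustrative:

```go
// Sketch of the readiness gate the errors above describe: refuse CNI work
// until calico/node has written /var/lib/calico/nodename.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken from the log

func nodeName() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// The exact failure mode logged for every sandbox above.
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := nodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI can proceed; node is", name)
}
```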
Jul 7 06:09:22.645149 containerd[1443]: time="2025-07-07T06:09:22.645044856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txlwn,Uid:5705c009-0d57-436d-b155-b8ac4388465f,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:22.755773 containerd[1443]: time="2025-07-07T06:09:22.755056517Z" level=error msg="Failed to destroy network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.756393 containerd[1443]: time="2025-07-07T06:09:22.756319641Z" level=error msg="encountered an error cleaning up failed sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.756603 containerd[1443]: time="2025-07-07T06:09:22.756504329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txlwn,Uid:5705c009-0d57-436d-b155-b8ac4388465f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.756843 kubelet[2485]: E0707 06:09:22.756805 2485 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.757222 kubelet[2485]: E0707 06:09:22.757035 2485 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-txlwn" Jul 7 06:09:22.757222 kubelet[2485]: E0707 06:09:22.757068 2485 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-txlwn" Jul 7 06:09:22.758868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f-shm.mount: Deactivated successfully. 
Jul 7 06:09:22.765095 kubelet[2485]: E0707 06:09:22.757125 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-txlwn_calico-system(5705c009-0d57-436d-b155-b8ac4388465f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-txlwn_calico-system(5705c009-0d57-436d-b155-b8ac4388465f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-txlwn" podUID="5705c009-0d57-436d-b155-b8ac4388465f" Jul 7 06:09:22.792020 containerd[1443]: time="2025-07-07T06:09:22.791925188Z" level=info msg="StopPodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\"" Jul 7 06:09:22.792958 containerd[1443]: time="2025-07-07T06:09:22.792915762Z" level=info msg="Ensure that sandbox 0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a in task-service has been cleanup successfully" Jul 7 06:09:22.801054 kubelet[2485]: I0707 06:09:22.801010 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:22.801473 kubelet[2485]: I0707 06:09:22.801092 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:22.803146 containerd[1443]: time="2025-07-07T06:09:22.802618094Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\"" Jul 7 06:09:22.803146 containerd[1443]: time="2025-07-07T06:09:22.802808543Z" level=info msg="Ensure that sandbox c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a in task-service has been cleanup successfully" Jul 7 06:09:22.803493 kubelet[2485]: I0707 06:09:22.803460 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:22.804001 containerd[1443]: time="2025-07-07T06:09:22.803955718Z" level=info msg="StopPodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\"" Jul 7 06:09:22.806404 containerd[1443]: time="2025-07-07T06:09:22.804149048Z" level=info msg="Ensure that sandbox 0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1 in task-service has been cleanup successfully" Jul 7 06:09:22.821574 kubelet[2485]: I0707 06:09:22.821529 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:22.824206 containerd[1443]: time="2025-07-07T06:09:22.824167550Z" level=info msg="StopPodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\"" Jul 7 06:09:22.824378 containerd[1443]: time="2025-07-07T06:09:22.824356399Z" level=info msg="Ensure that sandbox 781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f in task-service has been cleanup successfully" Jul 7 06:09:22.825483 kubelet[2485]: I0707 06:09:22.825090 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:22.825565 containerd[1443]: 
time="2025-07-07T06:09:22.825534621Z" level=info msg="StopPodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\"" Jul 7 06:09:22.825702 containerd[1443]: time="2025-07-07T06:09:22.825672977Z" level=info msg="Ensure that sandbox c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35 in task-service has been cleanup successfully" Jul 7 06:09:22.828254 kubelet[2485]: I0707 06:09:22.828224 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:22.830145 containerd[1443]: time="2025-07-07T06:09:22.830118079Z" level=info msg="StopPodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\"" Jul 7 06:09:22.830463 kubelet[2485]: I0707 06:09:22.830423 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:22.830600 containerd[1443]: time="2025-07-07T06:09:22.830575196Z" level=info msg="Ensure that sandbox 26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f in task-service has been cleanup successfully" Jul 7 06:09:22.831943 containerd[1443]: time="2025-07-07T06:09:22.831906978Z" level=info msg="StopPodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\"" Jul 7 06:09:22.832180 containerd[1443]: time="2025-07-07T06:09:22.832131396Z" level=info msg="Ensure that sandbox da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97 in task-service has been cleanup successfully" Jul 7 06:09:22.834685 kubelet[2485]: I0707 06:09:22.834661 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:22.836544 containerd[1443]: time="2025-07-07T06:09:22.835688430Z" level=info msg="StopPodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\"" Jul 7 06:09:22.836544 containerd[1443]: time="2025-07-07T06:09:22.836132584Z" level=info msg="Ensure that sandbox 3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529 in task-service has been cleanup successfully" Jul 7 06:09:22.838565 kubelet[2485]: I0707 06:09:22.837955 2485 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:22.839988 containerd[1443]: time="2025-07-07T06:09:22.839019405Z" level=info msg="StopPodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\"" Jul 7 06:09:22.839988 containerd[1443]: time="2025-07-07T06:09:22.839220177Z" level=info msg="Ensure that sandbox a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba in task-service has been cleanup successfully" Jul 7 06:09:22.893334 containerd[1443]: time="2025-07-07T06:09:22.891286272Z" level=error msg="StopPodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" failed" error="failed to destroy network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.896007 kubelet[2485]: E0707 06:09:22.895714 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:22.896007 kubelet[2485]: E0707 06:09:22.895793 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a"} Jul 7 06:09:22.896007 kubelet[2485]: E0707 06:09:22.895854 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.896007 kubelet[2485]: E0707 06:09:22.895879 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" podUID="20b82c11-f0c8-4cab-bff0-a1f67bee9ab4" Jul 7 06:09:22.896730 containerd[1443]: time="2025-07-07T06:09:22.896685299Z" level=error msg="StopPodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" failed" error="failed to destroy network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.897161 kubelet[2485]: E0707 06:09:22.897028 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:22.897161 kubelet[2485]: E0707 06:09:22.897074 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1"} Jul 7 06:09:22.897161 kubelet[2485]: E0707 06:09:22.897104 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acb26600-e422-4fa9-86c9-1e99272ac907\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 7 06:09:22.897161 kubelet[2485]: E0707 06:09:22.897123 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acb26600-e422-4fa9-86c9-1e99272ac907\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wtdw2" podUID="acb26600-e422-4fa9-86c9-1e99272ac907" Jul 7 06:09:22.907703 containerd[1443]: time="2025-07-07T06:09:22.907638393Z" level=error msg="StopPodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" failed" error="failed to destroy network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.907924 kubelet[2485]: E0707 06:09:22.907881 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:22.907995 kubelet[2485]: E0707 06:09:22.907933 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f"} Jul 7 06:09:22.907995 kubelet[2485]: E0707 06:09:22.907980 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4655dce-563f-40a1-900f-1c03e1a27866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.908082 kubelet[2485]: E0707 06:09:22.908002 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4655dce-563f-40a1-900f-1c03e1a27866\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c9c5d6b7-8slwf" podUID="b4655dce-563f-40a1-900f-1c03e1a27866" Jul 7 06:09:22.914148 containerd[1443]: time="2025-07-07T06:09:22.914099932Z" level=error msg="StopPodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" failed" error="failed to destroy network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 7 06:09:22.914557 kubelet[2485]: E0707 06:09:22.914505 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:22.914641 kubelet[2485]: E0707 06:09:22.914569 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba"} Jul 7 06:09:22.914641 kubelet[2485]: E0707 06:09:22.914635 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f7848e9-158e-4510-8474-f086afb371a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.914728 kubelet[2485]: E0707 06:09:22.914657 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f7848e9-158e-4510-8474-f086afb371a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-lmf8d" podUID="0f7848e9-158e-4510-8474-f086afb371a7" Jul 7 06:09:22.916542 containerd[1443]: time="2025-07-07T06:09:22.916498148Z" level=error msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" failed" error="failed to destroy network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.916862 kubelet[2485]: E0707 06:09:22.916737 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:22.916862 kubelet[2485]: E0707 06:09:22.916785 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a"} Jul 7 06:09:22.916862 kubelet[2485]: E0707 06:09:22.916813 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.916862 kubelet[2485]: E0707 06:09:22.916832 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" podUID="c16f1b03-a360-4556-a60c-eadfcd16ef1e" Jul 7 06:09:22.921147 containerd[1443]: time="2025-07-07T06:09:22.920915683Z" level=error msg="StopPodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" failed" error="failed to destroy network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.921446 kubelet[2485]: E0707 06:09:22.921279 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:22.921446 kubelet[2485]: E0707 06:09:22.921317 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f"} Jul 7 06:09:22.921446 kubelet[2485]: E0707 06:09:22.921346 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5705c009-0d57-436d-b155-b8ac4388465f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.921446 kubelet[2485]: E0707 06:09:22.921379 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5705c009-0d57-436d-b155-b8ac4388465f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-txlwn" podUID="5705c009-0d57-436d-b155-b8ac4388465f" Jul 7 06:09:22.933399 containerd[1443]: time="2025-07-07T06:09:22.933331673Z" level=error msg="StopPodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" failed" 
error="failed to destroy network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.933912 containerd[1443]: time="2025-07-07T06:09:22.933481071Z" level=error msg="StopPodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" failed" error="failed to destroy network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.933945 kubelet[2485]: E0707 06:09:22.933585 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:22.933945 kubelet[2485]: E0707 06:09:22.933641 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35"} Jul 7 06:09:22.933945 kubelet[2485]: E0707 06:09:22.933681 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.933945 kubelet[2485]: E0707 06:09:22.933592 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:22.933945 kubelet[2485]: E0707 06:09:22.933742 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529"} Jul 7 06:09:22.934126 kubelet[2485]: E0707 06:09:22.933771 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b552d16-8bbf-4c9c-b453-0c942c087079\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.934126 kubelet[2485]: E0707 06:09:22.933795 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"2b552d16-8bbf-4c9c-b453-0c942c087079\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" podUID="2b552d16-8bbf-4c9c-b453-0c942c087079" Jul 7 06:09:22.934126 kubelet[2485]: E0707 06:09:22.933703 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" podUID="058206b3-65d3-47c5-ac92-f4a3b7ef1d3d" Jul 7 06:09:22.941263 containerd[1443]: time="2025-07-07T06:09:22.941210497Z" level=error msg="StopPodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" failed" error="failed to destroy network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:09:22.941619 kubelet[2485]: E0707 06:09:22.941492 2485 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:22.941619 kubelet[2485]: E0707 06:09:22.941536 2485 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97"} Jul 7 06:09:22.941619 kubelet[2485]: E0707 06:09:22.941567 2485 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a09f92b-f03b-46c8-9b26-0233f582bf66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:09:22.941619 kubelet[2485]: E0707 06:09:22.941591 2485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a09f92b-f03b-46c8-9b26-0233f582bf66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-m7xfp" podUID="6a09f92b-f03b-46c8-9b26-0233f582bf66" Jul 7 06:09:24.740707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661088056.mount: Deactivated successfully. Jul 7 06:09:25.012162 containerd[1443]: time="2025-07-07T06:09:25.012113831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:25.013051 containerd[1443]: time="2025-07-07T06:09:25.012885008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 06:09:25.013840 containerd[1443]: time="2025-07-07T06:09:25.013779974Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:25.015633 containerd[1443]: time="2025-07-07T06:09:25.015600272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:25.016987 containerd[1443]: time="2025-07-07T06:09:25.016930258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.227513709s" Jul 7 06:09:25.016987 containerd[1443]: time="2025-07-07T06:09:25.016974388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 06:09:25.024071 containerd[1443]: time="2025-07-07T06:09:25.024004924Z" level=info msg="CreateContainer within sandbox \"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:09:25.039731 containerd[1443]: time="2025-07-07T06:09:25.039690728Z" level=info msg="CreateContainer within sandbox \"de210278b0316ef2d18ebfcb89ef6fb2d46d43754773bd69efb4c118eb57a6ec\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32\"" Jul 7 06:09:25.040426 containerd[1443]: time="2025-07-07T06:09:25.040384608Z" level=info msg="StartContainer for \"f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32\"" Jul 7 06:09:25.092153 systemd[1]: Started cri-containerd-f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32.scope - libcontainer container f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32. Jul 7 06:09:25.121536 containerd[1443]: time="2025-07-07T06:09:25.121476523Z" level=info msg="StartContainer for \"f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32\" returns successfully" Jul 7 06:09:25.306550 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:09:25.306811 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 06:09:25.396310 containerd[1443]: time="2025-07-07T06:09:25.396265870Z" level=info msg="StopPodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\"" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.497 [INFO][3899] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.497 [INFO][3899] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" iface="eth0" netns="/var/run/netns/cni-a2dec8a1-93b0-4b4e-c873-782c47361c95" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.498 [INFO][3899] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" iface="eth0" netns="/var/run/netns/cni-a2dec8a1-93b0-4b4e-c873-782c47361c95" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.499 [INFO][3899] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" iface="eth0" netns="/var/run/netns/cni-a2dec8a1-93b0-4b4e-c873-782c47361c95" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.499 [INFO][3899] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.500 [INFO][3899] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.578 [INFO][3910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.578 [INFO][3910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.578 [INFO][3910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.587 [WARNING][3910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.587 [INFO][3910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.590 [INFO][3910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:25.594912 containerd[1443]: 2025-07-07 06:09:25.593 [INFO][3899] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:25.597151 containerd[1443]: time="2025-07-07T06:09:25.595128410Z" level=info msg="TearDown network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" successfully" Jul 7 06:09:25.597151 containerd[1443]: time="2025-07-07T06:09:25.595173420Z" level=info msg="StopPodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" returns successfully" Jul 7 06:09:25.653317 kubelet[2485]: I0707 06:09:25.653194 2485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4s4g\" (UniqueName: \"kubernetes.io/projected/b4655dce-563f-40a1-900f-1c03e1a27866-kube-api-access-n4s4g\") pod \"b4655dce-563f-40a1-900f-1c03e1a27866\" (UID: \"b4655dce-563f-40a1-900f-1c03e1a27866\") " Jul 7 06:09:25.653317 kubelet[2485]: I0707 06:09:25.653239 2485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-ca-bundle\") pod \"b4655dce-563f-40a1-900f-1c03e1a27866\" (UID: \"b4655dce-563f-40a1-900f-1c03e1a27866\") " Jul 7 06:09:25.653317 kubelet[2485]: I0707 06:09:25.653277 2485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-backend-key-pair\") pod \"b4655dce-563f-40a1-900f-1c03e1a27866\" (UID: \"b4655dce-563f-40a1-900f-1c03e1a27866\") " Jul 7 06:09:25.659646 kubelet[2485]: I0707 06:09:25.659601 2485 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b4655dce-563f-40a1-900f-1c03e1a27866" (UID: "b4655dce-563f-40a1-900f-1c03e1a27866"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:09:25.662924 kubelet[2485]: I0707 06:09:25.662887 2485 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4655dce-563f-40a1-900f-1c03e1a27866-kube-api-access-n4s4g" (OuterVolumeSpecName: "kube-api-access-n4s4g") pod "b4655dce-563f-40a1-900f-1c03e1a27866" (UID: "b4655dce-563f-40a1-900f-1c03e1a27866"). InnerVolumeSpecName "kube-api-access-n4s4g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:09:25.667722 kubelet[2485]: I0707 06:09:25.667696 2485 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b4655dce-563f-40a1-900f-1c03e1a27866" (UID: "b4655dce-563f-40a1-900f-1c03e1a27866"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:09:25.741017 systemd[1]: run-netns-cni\x2da2dec8a1\x2d93b0\x2d4b4e\x2dc873\x2d782c47361c95.mount: Deactivated successfully. Jul 7 06:09:25.741183 systemd[1]: var-lib-kubelet-pods-b4655dce\x2d563f\x2d40a1\x2d900f\x2d1c03e1a27866-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dn4s4g.mount: Deactivated successfully. Jul 7 06:09:25.741251 systemd[1]: var-lib-kubelet-pods-b4655dce\x2d563f\x2d40a1\x2d900f\x2d1c03e1a27866-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 06:09:25.754295 kubelet[2485]: I0707 06:09:25.754263 2485 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:09:25.754295 kubelet[2485]: I0707 06:09:25.754289 2485 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4s4g\" (UniqueName: \"kubernetes.io/projected/b4655dce-563f-40a1-900f-1c03e1a27866-kube-api-access-n4s4g\") on node \"localhost\" DevicePath \"\"" Jul 7 06:09:25.754295 kubelet[2485]: I0707 06:09:25.754299 2485 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4655dce-563f-40a1-900f-1c03e1a27866-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:09:25.852872 systemd[1]: Removed slice kubepods-besteffort-podb4655dce_563f_40a1_900f_1c03e1a27866.slice - libcontainer container kubepods-besteffort-podb4655dce_563f_40a1_900f_1c03e1a27866.slice. Jul 7 06:09:25.866934 kubelet[2485]: I0707 06:09:25.866558 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l2zc6" podStartSLOduration=1.773561476 podStartE2EDuration="11.86653674s" podCreationTimestamp="2025-07-07 06:09:14 +0000 UTC" firstStartedPulling="2025-07-07 06:09:14.924564494 +0000 UTC m=+19.387419230" lastFinishedPulling="2025-07-07 06:09:25.017539758 +0000 UTC m=+29.480394494" observedRunningTime="2025-07-07 06:09:25.865585802 +0000 UTC m=+30.328440578" watchObservedRunningTime="2025-07-07 06:09:25.86653674 +0000 UTC m=+30.329391476" Jul 7 06:09:25.916636 systemd[1]: Created slice kubepods-besteffort-pod7580a4f7_8ba1_4f7c_b769_e1986b420afd.slice - libcontainer container kubepods-besteffort-pod7580a4f7_8ba1_4f7c_b769_e1986b420afd.slice. 
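The pod_startup_latency_tracker entry for calico-node-l2zc6 encodes an arithmetic relationship that checks out exactly against the logged numbers: podStartE2EDuration appears to be watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check, reusing the timestamps exactly as printed above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-07-07 06:09:14 +0000 UTC")
	firstPull := parse("2025-07-07 06:09:14.924564494 +0000 UTC")
	lastPull := parse("2025-07-07 06:09:25.017539758 +0000 UTC")
	running := parse("2025-07-07 06:09:25.86653674 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // SLO duration excludes image pulling
	fmt.Println(e2e, slo)                // 11.86653674s 1.773561476s
}

It prints 11.86653674s and 1.773561476s, matching the logged values; roughly 10.09s of the 11.87s startup was spent pulling the image.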
Jul 7 06:09:25.955836 kubelet[2485]: I0707 06:09:25.955527 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8ngm\" (UniqueName: \"kubernetes.io/projected/7580a4f7-8ba1-4f7c-b769-e1986b420afd-kube-api-access-g8ngm\") pod \"whisker-5cd454f589-gkc98\" (UID: \"7580a4f7-8ba1-4f7c-b769-e1986b420afd\") " pod="calico-system/whisker-5cd454f589-gkc98" Jul 7 06:09:25.955836 kubelet[2485]: I0707 06:09:25.955583 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7580a4f7-8ba1-4f7c-b769-e1986b420afd-whisker-backend-key-pair\") pod \"whisker-5cd454f589-gkc98\" (UID: \"7580a4f7-8ba1-4f7c-b769-e1986b420afd\") " pod="calico-system/whisker-5cd454f589-gkc98" Jul 7 06:09:25.955836 kubelet[2485]: I0707 06:09:25.955672 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7580a4f7-8ba1-4f7c-b769-e1986b420afd-whisker-ca-bundle\") pod \"whisker-5cd454f589-gkc98\" (UID: \"7580a4f7-8ba1-4f7c-b769-e1986b420afd\") " pod="calico-system/whisker-5cd454f589-gkc98" Jul 7 06:09:26.224397 containerd[1443]: time="2025-07-07T06:09:26.224294185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd454f589-gkc98,Uid:7580a4f7-8ba1-4f7c-b769-e1986b420afd,Namespace:calico-system,Attempt:0,}" Jul 7 06:09:26.340423 systemd-networkd[1386]: calia6f22213081: Link UP Jul 7 06:09:26.340615 systemd-networkd[1386]: calia6f22213081: Gained carrier Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.265 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.279 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5cd454f589--gkc98-eth0 whisker-5cd454f589- calico-system 7580a4f7-8ba1-4f7c-b769-e1986b420afd 906 0 2025-07-07 06:09:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cd454f589 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5cd454f589-gkc98 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia6f22213081 [] [] }} ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.279 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.300 [INFO][3948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" HandleID="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Workload="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.300 [INFO][3948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" 
HandleID="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Workload="localhost-k8s-whisker--5cd454f589--gkc98-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5cd454f589-gkc98", "timestamp":"2025-07-07 06:09:26.300487451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.300 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.300 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.300 [INFO][3948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.310 [INFO][3948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.315 [INFO][3948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.319 [INFO][3948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.320 [INFO][3948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.322 [INFO][3948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.322 [INFO][3948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.323 [INFO][3948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.327 [INFO][3948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.332 [INFO][3948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.332 [INFO][3948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" host="localhost" Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.332 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:26.354092 containerd[1443]: 2025-07-07 06:09:26.332 [INFO][3948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" HandleID="k8s-pod-network.59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Workload="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.354768 containerd[1443]: 2025-07-07 06:09:26.334 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cd454f589--gkc98-eth0", GenerateName:"whisker-5cd454f589-", Namespace:"calico-system", SelfLink:"", UID:"7580a4f7-8ba1-4f7c-b769-e1986b420afd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cd454f589", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5cd454f589-gkc98", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia6f22213081", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:26.354768 containerd[1443]: 2025-07-07 06:09:26.334 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.354768 containerd[1443]: 2025-07-07 06:09:26.334 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6f22213081 ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.354768 containerd[1443]: 2025-07-07 06:09:26.342 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.354768 containerd[1443]: 2025-07-07 06:09:26.343 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cd454f589--gkc98-eth0", GenerateName:"whisker-5cd454f589-", Namespace:"calico-system", SelfLink:"", UID:"7580a4f7-8ba1-4f7c-b769-e1986b420afd", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cd454f589", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d", Pod:"whisker-5cd454f589-gkc98", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia6f22213081", MAC:"6a:28:d3:2d:bb:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:26.354768 containerd[1443]: 2025-07-07 06:09:26.351 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d" Namespace="calico-system" Pod="whisker-5cd454f589-gkc98" WorkloadEndpoint="localhost-k8s-whisker--5cd454f589--gkc98-eth0" Jul 7 06:09:26.367833 containerd[1443]: time="2025-07-07T06:09:26.367718368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:26.368388 containerd[1443]: time="2025-07-07T06:09:26.368068605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:26.368388 containerd[1443]: time="2025-07-07T06:09:26.368090130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:26.368388 containerd[1443]: time="2025-07-07T06:09:26.368231242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:26.387141 systemd[1]: Started cri-containerd-59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d.scope - libcontainer container 59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d.
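The IPAM sequence above (look up host affinities, try the affinity for 192.168.88.128/26, load the block, claim the lowest free address) is Calico's block-affinity allocator: this host owns the /26, so the assignment needs no coordination with other nodes. Below is only a toy sketch of the "lowest free IP in an affine block" step; the block's first address is treated as reserved purely as an assumption so the result matches the .129 seen in the log, and the real allocator in ipam.go additionally manages handles, attributes, and compare-and-swap writes to the datastore:

package main

import (
	"fmt"
	"net"
)

// inc returns the next IP address, carrying across octets.
func inc(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

// nextFree scans an affine block for the lowest address not yet in use.
func nextFree(block *net.IPNet, used map[string]bool) (net.IP, bool) {
	for ip := block.IP.Mask(block.Mask); block.Contains(ip); ip = inc(ip) {
		if !used[ip.String()] {
			return ip, true
		}
	}
	return nil, false
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.88.128/26")
	used := map[string]bool{"192.168.88.128": true} // assumed reservation of the block's first address
	ip, ok := nextFree(block, used)
	fmt.Println(ip, ok) // 192.168.88.129 true, matching the allocation in the log
}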
Jul 7 06:09:26.397064 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:26.425091 containerd[1443]: time="2025-07-07T06:09:26.425055490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd454f589-gkc98,Uid:7580a4f7-8ba1-4f7c-b769-e1986b420afd,Namespace:calico-system,Attempt:0,} returns sandbox id \"59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d\"" Jul 7 06:09:26.426805 containerd[1443]: time="2025-07-07T06:09:26.426708056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:09:27.209465 containerd[1443]: time="2025-07-07T06:09:27.209334958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:27.209785 containerd[1443]: time="2025-07-07T06:09:27.209757209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 06:09:27.210757 containerd[1443]: time="2025-07-07T06:09:27.210719015Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:27.213334 containerd[1443]: time="2025-07-07T06:09:27.213297648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:27.213970 containerd[1443]: time="2025-07-07T06:09:27.213796075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 787.043889ms" Jul 7 06:09:27.213970 containerd[1443]: time="2025-07-07T06:09:27.213825681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 06:09:27.216855 containerd[1443]: time="2025-07-07T06:09:27.216817563Z" level=info msg="CreateContainer within sandbox \"59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:09:27.226869 containerd[1443]: time="2025-07-07T06:09:27.226829270Z" level=info msg="CreateContainer within sandbox \"59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6c6df5cc8e59e111e197ba7349aa46c3111d0afaf353413a723eaf8157075c67\"" Jul 7 06:09:27.227523 containerd[1443]: time="2025-07-07T06:09:27.227445282Z" level=info msg="StartContainer for \"6c6df5cc8e59e111e197ba7349aa46c3111d0afaf353413a723eaf8157075c67\"" Jul 7 06:09:27.251147 systemd[1]: Started cri-containerd-6c6df5cc8e59e111e197ba7349aa46c3111d0afaf353413a723eaf8157075c67.scope - libcontainer container 6c6df5cc8e59e111e197ba7349aa46c3111d0afaf353413a723eaf8157075c67. 
Jul 7 06:09:27.280259 containerd[1443]: time="2025-07-07T06:09:27.280216319Z" level=info msg="StartContainer for \"6c6df5cc8e59e111e197ba7349aa46c3111d0afaf353413a723eaf8157075c67\" returns successfully" Jul 7 06:09:27.281500 containerd[1443]: time="2025-07-07T06:09:27.281452024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:09:27.626951 kubelet[2485]: I0707 06:09:27.626880 2485 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4655dce-563f-40a1-900f-1c03e1a27866" path="/var/lib/kubelet/pods/b4655dce-563f-40a1-900f-1c03e1a27866/volumes" Jul 7 06:09:27.799757 kubelet[2485]: I0707 06:09:27.799719 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:27.800254 kubelet[2485]: E0707 06:09:27.800152 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:27.855178 kubelet[2485]: E0707 06:09:27.855149 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:27.886945 systemd[1]: run-containerd-runc-k8s.io-f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32-runc.K1cT21.mount: Deactivated successfully. Jul 7 06:09:28.081028 kernel: bpftool[4258]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 06:09:28.259466 systemd-networkd[1386]: vxlan.calico: Link UP Jul 7 06:09:28.259472 systemd-networkd[1386]: vxlan.calico: Gained carrier Jul 7 06:09:28.336229 systemd-networkd[1386]: calia6f22213081: Gained IPv6LL Jul 7 06:09:28.629589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660590617.mount: Deactivated successfully. 
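The vxlan.calico link gaining carrier alongside the calia* veths means Felix has brought up the node's VXLAN overlay device. A rough equivalent using the github.com/vishvananda/netlink package is sketched below; the VNI and UDP port are Calico's documented defaults and are assumptions here, since the log does not print them:

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	attrs := netlink.NewLinkAttrs()
	attrs.Name = "vxlan.calico"
	vx := &netlink.Vxlan{
		LinkAttrs: attrs,
		VxlanId:   4096, // Calico's default VXLAN VNI (assumption)
		Port:      4789, // standard VXLAN UDP port (assumption)
	}
	// Create the device, then bring it up, as the systemd-networkd
	// "Link UP" / "Gained carrier" entries above reflect.
	if err := netlink.LinkAdd(vx); err != nil {
		log.Fatal(err)
	}
	if err := netlink.LinkSetUp(vx); err != nil {
		log.Fatal(err)
	}
}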
Jul 7 06:09:28.686929 containerd[1443]: time="2025-07-07T06:09:28.686877051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:28.687908 containerd[1443]: time="2025-07-07T06:09:28.687870217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 06:09:28.688700 containerd[1443]: time="2025-07-07T06:09:28.688673023Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:28.690942 containerd[1443]: time="2025-07-07T06:09:28.690910888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:28.699132 containerd[1443]: time="2025-07-07T06:09:28.699095266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.417507973s" Jul 7 06:09:28.699132 containerd[1443]: time="2025-07-07T06:09:28.699129793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 06:09:28.701141 containerd[1443]: time="2025-07-07T06:09:28.701114965Z" level=info msg="CreateContainer within sandbox \"59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:09:28.726461 containerd[1443]: time="2025-07-07T06:09:28.726408213Z" level=info msg="CreateContainer within sandbox \"59ff84349107d5ad2de1dcfc751ef656a3c1e9d7720b795ac6bfede54011002d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"163bf1fcdbad5ca80d29c217a4356acb2d9a75b0c81410c0940eca5c56416099\"" Jul 7 06:09:28.727393 containerd[1443]: time="2025-07-07T06:09:28.727344287Z" level=info msg="StartContainer for \"163bf1fcdbad5ca80d29c217a4356acb2d9a75b0c81410c0940eca5c56416099\"" Jul 7 06:09:28.770151 systemd[1]: Started cri-containerd-163bf1fcdbad5ca80d29c217a4356acb2d9a75b0c81410c0940eca5c56416099.scope - libcontainer container 163bf1fcdbad5ca80d29c217a4356acb2d9a75b0c81410c0940eca5c56416099. Jul 7 06:09:28.801200 containerd[1443]: time="2025-07-07T06:09:28.801151521Z" level=info msg="StartContainer for \"163bf1fcdbad5ca80d29c217a4356acb2d9a75b0c81410c0940eca5c56416099\" returns successfully" Jul 7 06:09:29.872227 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Jul 7 06:09:33.106855 systemd[1]: Started sshd@7-10.0.0.114:22-10.0.0.1:52528.service - OpenSSH per-connection server daemon (10.0.0.1:52528). Jul 7 06:09:33.151552 sshd[4391]: Accepted publickey for core from 10.0.0.1 port 52528 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:09:33.153128 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:33.157150 systemd-logind[1425]: New session 8 of user core. Jul 7 06:09:33.174129 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 7 06:09:33.425878 sshd[4391]: pam_unix(sshd:session): session closed for user core Jul 7 06:09:33.429323 systemd[1]: sshd@7-10.0.0.114:22-10.0.0.1:52528.service: Deactivated successfully. Jul 7 06:09:33.431132 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:09:33.431808 systemd-logind[1425]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:09:33.432615 systemd-logind[1425]: Removed session 8. Jul 7 06:09:33.624502 containerd[1443]: time="2025-07-07T06:09:33.624438572Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\"" Jul 7 06:09:33.672288 kubelet[2485]: I0707 06:09:33.672220 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5cd454f589-gkc98" podStartSLOduration=6.39875495 podStartE2EDuration="8.672198585s" podCreationTimestamp="2025-07-07 06:09:25 +0000 UTC" firstStartedPulling="2025-07-07 06:09:26.426266999 +0000 UTC m=+30.889121735" lastFinishedPulling="2025-07-07 06:09:28.699710634 +0000 UTC m=+33.162565370" observedRunningTime="2025-07-07 06:09:28.871576734 +0000 UTC m=+33.334431470" watchObservedRunningTime="2025-07-07 06:09:33.672198585 +0000 UTC m=+38.135053321" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.671 [INFO][4426] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.671 [INFO][4426] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" iface="eth0" netns="/var/run/netns/cni-c6a2db39-69bc-4202-cbe4-89d0870847b2" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.672 [INFO][4426] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" iface="eth0" netns="/var/run/netns/cni-c6a2db39-69bc-4202-cbe4-89d0870847b2" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.672 [INFO][4426] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" iface="eth0" netns="/var/run/netns/cni-c6a2db39-69bc-4202-cbe4-89d0870847b2" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.672 [INFO][4426] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.672 [INFO][4426] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.695 [INFO][4434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.695 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.695 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.704 [WARNING][4434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.705 [INFO][4434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.706 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:33.709472 containerd[1443]: 2025-07-07 06:09:33.707 [INFO][4426] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:33.710266 containerd[1443]: time="2025-07-07T06:09:33.709855434Z" level=info msg="TearDown network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" successfully" Jul 7 06:09:33.710266 containerd[1443]: time="2025-07-07T06:09:33.709883479Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" returns successfully" Jul 7 06:09:33.711020 containerd[1443]: time="2025-07-07T06:09:33.710582564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-2k4fl,Uid:c16f1b03-a360-4556-a60c-eadfcd16ef1e,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:09:33.712153 systemd[1]: run-netns-cni\x2dc6a2db39\x2d69bc\x2d4202\x2dcbe4\x2d89d0870847b2.mount: Deactivated successfully. Jul 7 06:09:33.824666 systemd-networkd[1386]: caliacecd82815c: Link UP Jul 7 06:09:33.825011 systemd-networkd[1386]: caliacecd82815c: Gained carrier Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.765 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0 calico-apiserver-5d7db7788- calico-apiserver c16f1b03-a360-4556-a60c-eadfcd16ef1e 981 0 2025-07-07 06:09:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7db7788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7db7788-2k4fl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliacecd82815c [] [] }} ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.765 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.789 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" 
HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.789 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7db7788-2k4fl", "timestamp":"2025-07-07 06:09:33.78936296 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.789 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.789 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.789 [INFO][4459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.799 [INFO][4459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.803 [INFO][4459] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.807 [INFO][4459] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.809 [INFO][4459] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.811 [INFO][4459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.811 [INFO][4459] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.812 [INFO][4459] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131 Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.815 [INFO][4459] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.820 [INFO][4459] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.820 [INFO][4459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" 
host="localhost" Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.820 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:33.836481 containerd[1443]: 2025-07-07 06:09:33.820 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.837074 containerd[1443]: 2025-07-07 06:09:33.822 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"c16f1b03-a360-4556-a60c-eadfcd16ef1e", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7db7788-2k4fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacecd82815c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:33.837074 containerd[1443]: 2025-07-07 06:09:33.823 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.837074 containerd[1443]: 2025-07-07 06:09:33.823 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacecd82815c ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.837074 containerd[1443]: 2025-07-07 06:09:33.825 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.837074 containerd[1443]: 2025-07-07 06:09:33.825 
[INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"c16f1b03-a360-4556-a60c-eadfcd16ef1e", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131", Pod:"calico-apiserver-5d7db7788-2k4fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacecd82815c", MAC:"ca:66:c1:46:6b:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:33.837074 containerd[1443]: 2025-07-07 06:09:33.833 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-2k4fl" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:33.851504 containerd[1443]: time="2025-07-07T06:09:33.851394444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:33.851504 containerd[1443]: time="2025-07-07T06:09:33.851455014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:33.851504 containerd[1443]: time="2025-07-07T06:09:33.851466376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:33.851707 containerd[1443]: time="2025-07-07T06:09:33.851545671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:33.872123 systemd[1]: Started cri-containerd-59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131.scope - libcontainer container 59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131. 
Jul 7 06:09:33.882869 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:33.910400 containerd[1443]: time="2025-07-07T06:09:33.910312531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-2k4fl,Uid:c16f1b03-a360-4556-a60c-eadfcd16ef1e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\"" Jul 7 06:09:33.911770 containerd[1443]: time="2025-07-07T06:09:33.911728784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:09:34.864411 systemd-networkd[1386]: caliacecd82815c: Gained IPv6LL Jul 7 06:09:35.238325 containerd[1443]: time="2025-07-07T06:09:35.238203773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:35.239492 containerd[1443]: time="2025-07-07T06:09:35.239447544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 06:09:35.240294 containerd[1443]: time="2025-07-07T06:09:35.240261602Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:35.242478 containerd[1443]: time="2025-07-07T06:09:35.242438091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:35.243223 containerd[1443]: time="2025-07-07T06:09:35.243181537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.331403544s" Jul 7 06:09:35.243272 containerd[1443]: time="2025-07-07T06:09:35.243228305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:09:35.245571 containerd[1443]: time="2025-07-07T06:09:35.245520173Z" level=info msg="CreateContainer within sandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:09:35.257153 containerd[1443]: time="2025-07-07T06:09:35.257088094Z" level=info msg="CreateContainer within sandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\"" Jul 7 06:09:35.260183 containerd[1443]: time="2025-07-07T06:09:35.260009989Z" level=info msg="StartContainer for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\"" Jul 7 06:09:35.297182 systemd[1]: Started cri-containerd-28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5.scope - libcontainer container 28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5. 
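For scale, the calico/apiserver pull above moved 44,517,149 compressed bytes ("bytes read") in 1.331403544s; a quick back-of-envelope throughput calculation:

package main

import "fmt"

func main() {
	const bytesRead = 44517149.0 // compressed bytes reported by containerd
	const seconds = 1.331403544  // pull duration from the log
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~33.4 MB/s
}

About 33 MB/s from ghcr.io, in the same range as the earlier calico/node pull (152,544,909 bytes in about 3.23s, roughly 47 MB/s).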
Jul 7 06:09:35.374523 containerd[1443]: time="2025-07-07T06:09:35.374473993Z" level=info msg="StartContainer for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" returns successfully" Jul 7 06:09:35.625805 containerd[1443]: time="2025-07-07T06:09:35.625751348Z" level=info msg="StopPodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\"" Jul 7 06:09:35.626701 containerd[1443]: time="2025-07-07T06:09:35.626662783Z" level=info msg="StopPodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\"" Jul 7 06:09:35.626844 containerd[1443]: time="2025-07-07T06:09:35.626713512Z" level=info msg="StopPodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\"" Jul 7 06:09:35.626977 containerd[1443]: time="2025-07-07T06:09:35.626923627Z" level=info msg="StopPodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\"" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.711 [INFO][4607] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.711 [INFO][4607] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" iface="eth0" netns="/var/run/netns/cni-8f0a73a8-049d-2639-40cf-8afdba071f58" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.712 [INFO][4607] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" iface="eth0" netns="/var/run/netns/cni-8f0a73a8-049d-2639-40cf-8afdba071f58" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.713 [INFO][4607] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" iface="eth0" netns="/var/run/netns/cni-8f0a73a8-049d-2639-40cf-8afdba071f58" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.713 [INFO][4607] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.713 [INFO][4607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.750 [INFO][4643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.750 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.750 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.760 [WARNING][4643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.760 [INFO][4643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.762 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:35.767826 containerd[1443]: 2025-07-07 06:09:35.764 [INFO][4607] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:35.768275 containerd[1443]: time="2025-07-07T06:09:35.768055351Z" level=info msg="TearDown network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" successfully" Jul 7 06:09:35.768275 containerd[1443]: time="2025-07-07T06:09:35.768089837Z" level=info msg="StopPodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" returns successfully" Jul 7 06:09:35.770139 systemd[1]: run-netns-cni\x2d8f0a73a8\x2d049d\x2d2639\x2d40cf\x2d8afdba071f58.mount: Deactivated successfully. Jul 7 06:09:35.770270 containerd[1443]: time="2025-07-07T06:09:35.770178631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fbfd84b85-7qqnq,Uid:058206b3-65d3-47c5-ac92-f4a3b7ef1d3d,Namespace:calico-system,Attempt:1,}" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.711 [INFO][4612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.711 [INFO][4612] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" iface="eth0" netns="/var/run/netns/cni-14024ee8-c2b3-f614-8746-bb057f4a33e2" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.712 [INFO][4612] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" iface="eth0" netns="/var/run/netns/cni-14024ee8-c2b3-f614-8746-bb057f4a33e2" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.713 [INFO][4612] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" iface="eth0" netns="/var/run/netns/cni-14024ee8-c2b3-f614-8746-bb057f4a33e2" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.713 [INFO][4612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.713 [INFO][4612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.764 [INFO][4641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.764 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.764 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.776 [WARNING][4641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.776 [INFO][4641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.779 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:35.783315 containerd[1443]: 2025-07-07 06:09:35.781 [INFO][4612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:35.783768 containerd[1443]: time="2025-07-07T06:09:35.783475245Z" level=info msg="TearDown network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" successfully" Jul 7 06:09:35.783768 containerd[1443]: time="2025-07-07T06:09:35.783501490Z" level=info msg="StopPodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" returns successfully" Jul 7 06:09:35.784522 containerd[1443]: time="2025-07-07T06:09:35.784200608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lmf8d,Uid:0f7848e9-158e-4510-8474-f086afb371a7,Namespace:calico-system,Attempt:1,}" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.708 [INFO][4608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.708 [INFO][4608] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" iface="eth0" netns="/var/run/netns/cni-a61361d2-7bad-d1e9-cfb5-098759ad7730" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.709 [INFO][4608] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" iface="eth0" netns="/var/run/netns/cni-a61361d2-7bad-d1e9-cfb5-098759ad7730" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.709 [INFO][4608] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" iface="eth0" netns="/var/run/netns/cni-a61361d2-7bad-d1e9-cfb5-098759ad7730" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.709 [INFO][4608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.709 [INFO][4608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.750 [INFO][4639] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.750 [INFO][4639] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.778 [INFO][4639] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.794 [WARNING][4639] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.794 [INFO][4639] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.796 [INFO][4639] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:35.811296 containerd[1443]: 2025-07-07 06:09:35.800 [INFO][4608] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:35.812085 containerd[1443]: time="2025-07-07T06:09:35.811860577Z" level=info msg="TearDown network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" successfully" Jul 7 06:09:35.812085 containerd[1443]: time="2025-07-07T06:09:35.811892422Z" level=info msg="StopPodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" returns successfully" Jul 7 06:09:35.812503 kubelet[2485]: E0707 06:09:35.812210 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:35.814334 containerd[1443]: time="2025-07-07T06:09:35.814028664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtdw2,Uid:acb26600-e422-4fa9-86c9-1e99272ac907,Namespace:kube-system,Attempt:1,}" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.717 [INFO][4620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.717 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" iface="eth0" netns="/var/run/netns/cni-24396137-f9da-c73b-b9f1-2c03b0447448" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.718 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" iface="eth0" netns="/var/run/netns/cni-24396137-f9da-c73b-b9f1-2c03b0447448" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.718 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" iface="eth0" netns="/var/run/netns/cni-24396137-f9da-c73b-b9f1-2c03b0447448" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.718 [INFO][4620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.718 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.760 [INFO][4656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.760 [INFO][4656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.796 [INFO][4656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.814 [WARNING][4656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.814 [INFO][4656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.817 [INFO][4656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:35.824158 containerd[1443]: 2025-07-07 06:09:35.819 [INFO][4620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:35.825272 containerd[1443]: time="2025-07-07T06:09:35.825036570Z" level=info msg="TearDown network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" successfully" Jul 7 06:09:35.825272 containerd[1443]: time="2025-07-07T06:09:35.825066896Z" level=info msg="StopPodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" returns successfully" Jul 7 06:09:35.825379 kubelet[2485]: E0707 06:09:35.825352 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:35.825952 containerd[1443]: time="2025-07-07T06:09:35.825918520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m7xfp,Uid:6a09f92b-f03b-46c8-9b26-0233f582bf66,Namespace:kube-system,Attempt:1,}" Jul 7 06:09:35.887327 kubelet[2485]: I0707 06:09:35.887191 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7db7788-2k4fl" podStartSLOduration=24.554604239 podStartE2EDuration="25.887172704s" podCreationTimestamp="2025-07-07 06:09:10 +0000 UTC" firstStartedPulling="2025-07-07 06:09:33.911457215 +0000 UTC m=+38.374311911" lastFinishedPulling="2025-07-07 06:09:35.24402564 +0000 UTC m=+39.706880376" observedRunningTime="2025-07-07 06:09:35.885408364 +0000 UTC m=+40.348263060" watchObservedRunningTime="2025-07-07 06:09:35.887172704 +0000 UTC m=+40.350027400" Jul 7 06:09:35.967127 systemd-networkd[1386]: cali800d6a288b4: Link UP Jul 7 06:09:35.967919 systemd-networkd[1386]: cali800d6a288b4: Gained carrier Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.857 [INFO][4671] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0 calico-kube-controllers-7fbfd84b85- calico-system 058206b3-65d3-47c5-ac92-f4a3b7ef1d3d 1007 0 2025-07-07 06:09:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fbfd84b85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7fbfd84b85-7qqnq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali800d6a288b4 [] [] }} ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" 
Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.857 [INFO][4671] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.908 [INFO][4723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" HandleID="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.908 [INFO][4723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" HandleID="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7fbfd84b85-7qqnq", "timestamp":"2025-07-07 06:09:35.9083865 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.908 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.908 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.908 [INFO][4723] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.919 [INFO][4723] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.924 [INFO][4723] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.930 [INFO][4723] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.935 [INFO][4723] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.939 [INFO][4723] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.939 [INFO][4723] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.943 [INFO][4723] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.948 [INFO][4723] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.959 [INFO][4723] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.959 [INFO][4723] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" host="localhost" Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.959 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
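A note on the four StopPodSandbox teardowns earlier in this burst: each one hits the same [WARNING] "Asked to release address but it doesn't exist. Ignoring". That is expected, not a fault. The plugin first tries the allocation by its HandleID ("Releasing address using handleID"), then falls back to the workload ID ("Releasing address using workloadID"), and treats a missing allocation as success so that repeated CNI DELs stay idempotent. A toy, self-contained Go sketch of that flow (names and the stored value are illustrative, not libcalico-go's actual API or data):

    package main

    import "fmt"

    // allocations maps an IPAM handle to its addresses; the real store is
    // Calico's datastore, not an in-memory map. "192.0.2.1" is a toy value.
    var allocations = map[string][]string{
    	"example-handle": {"192.0.2.1"},
    }

    // release mimics the logged flow: try the primary HandleID first,
    // fall back to the workload ID, and treat a missing allocation as
    // success so a repeated CNI DEL is harmless.
    func release(handleID, workloadID string) {
    	for _, id := range []string{handleID, workloadID} {
    		if ips, ok := allocations[id]; ok {
    			delete(allocations, id)
    			fmt.Printf("released %v via %s\n", ips, id)
    			return
    		}
    	}
    	fmt.Println("asked to release address but it doesn't exist; ignoring")
    }

    func main() {
    	release("example-handle", "example-workload") // releases via handle
    	release("example-handle", "example-workload") // second DEL: nothing left, ignored
    }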
Jul 7 06:09:35.995774 containerd[1443]: 2025-07-07 06:09:35.959 [INFO][4723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" HandleID="k8s-pod-network.107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.997164 containerd[1443]: 2025-07-07 06:09:35.962 [INFO][4671] cni-plugin/k8s.go 418: Populated endpoint ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0", GenerateName:"calico-kube-controllers-7fbfd84b85-", Namespace:"calico-system", SelfLink:"", UID:"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fbfd84b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7fbfd84b85-7qqnq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali800d6a288b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:35.997164 containerd[1443]: 2025-07-07 06:09:35.963 [INFO][4671] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.997164 containerd[1443]: 2025-07-07 06:09:35.963 [INFO][4671] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali800d6a288b4 ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.997164 containerd[1443]: 2025-07-07 06:09:35.968 [INFO][4671] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:35.997164 containerd[1443]: 2025-07-07 06:09:35.968 [INFO][4671] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0", GenerateName:"calico-kube-controllers-7fbfd84b85-", Namespace:"calico-system", SelfLink:"", UID:"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fbfd84b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc", Pod:"calico-kube-controllers-7fbfd84b85-7qqnq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali800d6a288b4", MAC:"06:45:26:0f:13:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:35.997164 containerd[1443]: 2025-07-07 06:09:35.992 [INFO][4671] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc" Namespace="calico-system" Pod="calico-kube-controllers-7fbfd84b85-7qqnq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:36.013834 containerd[1443]: time="2025-07-07T06:09:36.013465580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:36.013834 containerd[1443]: time="2025-07-07T06:09:36.013553915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:36.013834 containerd[1443]: time="2025-07-07T06:09:36.013569037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.013834 containerd[1443]: time="2025-07-07T06:09:36.013704020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.035203 systemd[1]: Started cri-containerd-107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc.scope - libcontainer container 107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc. 
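The two large endpoint dumps above are the same v3.WorkloadEndpoint before and after the "Added Mac, interface name, and active container ID" step: the ADD flow first writes the endpoint with ContainerID and MAC empty ("Populated endpoint"), then fills them in once the veth exists (MAC 06:45:26:0f:13:3a, ContainerID 107574b8...) and writes the result back ("Wrote updated endpoint to datastore"). A reduced Go sketch of that two-phase update, with the struct trimmed to the fields visible in the log rather than the full projectcalico v3 type:

    package main

    import "fmt"

    // workloadEndpoint is a trimmed stand-in for Calico's v3.WorkloadEndpoint,
    // keeping only fields visible in the log above.
    type workloadEndpoint struct {
    	Pod           string
    	InterfaceName string
    	IPNetworks    []string
    	ContainerID   string // empty until the veth is created
    	MAC           string // empty until the veth is created
    }

    func main() {
    	// Phase 1: "Populated endpoint" - identity and IP are known,
    	// dataplane details are not.
    	ep := workloadEndpoint{
    		Pod:           "calico-kube-controllers-7fbfd84b85-7qqnq",
    		InterfaceName: "cali800d6a288b4",
    		IPNetworks:    []string{"192.168.88.131/32"},
    	}
    	// Phase 2: "Added Mac, interface name, and active container ID to
    	// endpoint", then the endpoint is written back to the datastore.
    	ep.ContainerID = "107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc"
    	ep.MAC = "06:45:26:0f:13:3a"
    	fmt.Printf("%+v\n", ep)
    }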
Jul 7 06:09:36.056684 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:36.056875 systemd-networkd[1386]: caliba6f51d8f7e: Link UP Jul 7 06:09:36.057617 systemd-networkd[1386]: caliba6f51d8f7e: Gained carrier Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.867 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0 goldmane-768f4c5c69- calico-system 0f7848e9-158e-4510-8474-f086afb371a7 1008 0 2025-07-07 06:09:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-lmf8d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliba6f51d8f7e [] [] }} ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.868 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.918 [INFO][4733] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" HandleID="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.918 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" HandleID="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-lmf8d", "timestamp":"2025-07-07 06:09:35.918247371 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.918 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.960 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:35.960 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.018 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.026 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.032 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.034 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.037 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.037 [INFO][4733] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.038 [INFO][4733] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.042 [INFO][4733] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.048 [INFO][4733] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.048 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" host="localhost" Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.050 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
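Each assign walk above has the same shape: acquire the host-wide lock, confirm this host's affinity to block 192.168.88.128/26, load the block, claim the next free address under a per-container handle, and write the block back to claim the IP (the ipam/ipam.go file:line references mark each step). A self-contained toy allocator showing just the claim step; it assumes .128-.130 were taken by earlier pods (the log only shows .131-.134 being handed out here) and ignores Calico's real block bookkeeping and reservations:

    package main

    import (
    	"fmt"
    	"net"
    )

    // nextFree claims the first unused address in the block, mirroring the
    // "Attempting to assign 1 addresses from block" step in the log.
    func nextFree(block *net.IPNet, used map[string]bool) (net.IP, error) {
    	ip := block.IP.Mask(block.Mask)
    	for ; block.Contains(ip); ip = inc(ip) {
    		if !used[ip.String()] {
    			used[ip.String()] = true
    			return ip, nil
    		}
    	}
    	return nil, fmt.Errorf("block %s is full", block)
    }

    // inc returns ip+1, carrying across octets.
    func inc(ip net.IP) net.IP {
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	for i := len(out) - 1; i >= 0; i-- {
    		out[i]++
    		if out[i] != 0 {
    			break
    		}
    	}
    	return out
    }

    func main() {
    	_, block, _ := net.ParseCIDR("192.168.88.128/26")
    	used := map[string]bool{
    		"192.168.88.128": true, "192.168.88.129": true,
    		"192.168.88.130": true, "192.168.88.131": true,
    	}
    	ip, _ := nextFree(block, used)
    	fmt.Println(ip) // 192.168.88.132 - what goldmane-768f4c5c69-lmf8d gets above
    }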
Jul 7 06:09:36.073411 containerd[1443]: 2025-07-07 06:09:36.050 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" HandleID="k8s-pod-network.7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.074163 containerd[1443]: 2025-07-07 06:09:36.052 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0f7848e9-158e-4510-8474-f086afb371a7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-lmf8d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba6f51d8f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.074163 containerd[1443]: 2025-07-07 06:09:36.052 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.074163 containerd[1443]: 2025-07-07 06:09:36.052 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba6f51d8f7e ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.074163 containerd[1443]: 2025-07-07 06:09:36.058 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.074163 containerd[1443]: 2025-07-07 06:09:36.060 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0f7848e9-158e-4510-8474-f086afb371a7", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc", Pod:"goldmane-768f4c5c69-lmf8d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba6f51d8f7e", MAC:"9e:a3:d0:9b:24:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.074163 containerd[1443]: 2025-07-07 06:09:36.069 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc" Namespace="calico-system" Pod="goldmane-768f4c5c69-lmf8d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:36.091711 containerd[1443]: time="2025-07-07T06:09:36.091424592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:36.091711 containerd[1443]: time="2025-07-07T06:09:36.091496323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:36.091711 containerd[1443]: time="2025-07-07T06:09:36.091527729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.091711 containerd[1443]: time="2025-07-07T06:09:36.091632026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.102551 containerd[1443]: time="2025-07-07T06:09:36.102510865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fbfd84b85-7qqnq,Uid:058206b3-65d3-47c5-ac92-f4a3b7ef1d3d,Namespace:calico-system,Attempt:1,} returns sandbox id \"107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc\"" Jul 7 06:09:36.104299 containerd[1443]: time="2025-07-07T06:09:36.104271716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:09:36.117130 systemd[1]: Started cri-containerd-7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc.scope - libcontainer container 7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc. 
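Once the kube-controllers sandbox is up, the runtime is asked to pull ghcr.io/flatcar/calico/kube-controllers:v3.30.2 (the PullImage entry above). kubelet actually drives this through the CRI ImageService PullImage RPC; as a rough equivalent, a direct pull against the same containerd socket looks like the sketch below (import paths assume containerd 1.x; the 2.x Go client moved packages):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the same containerd instance that logs as containerd[1443].
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// Kubernetes-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.2",
    		containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(img.Name(), img.Target().Digest)
    }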
Jul 7 06:09:36.130469 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:36.156673 containerd[1443]: time="2025-07-07T06:09:36.154810633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-lmf8d,Uid:0f7848e9-158e-4510-8474-f086afb371a7,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc\"" Jul 7 06:09:36.163098 systemd-networkd[1386]: calidb351f2839a: Link UP Jul 7 06:09:36.163737 systemd-networkd[1386]: calidb351f2839a: Gained carrier Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:35.890 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0 coredns-668d6bf9bc- kube-system acb26600-e422-4fa9-86c9-1e99272ac907 1006 0 2025-07-07 06:09:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-wtdw2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidb351f2839a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:35.891 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:35.949 [INFO][4744] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" HandleID="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:35.950 [INFO][4744] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" HandleID="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-wtdw2", "timestamp":"2025-07-07 06:09:35.949925221 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:35.950 [INFO][4744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.050 [INFO][4744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.050 [INFO][4744] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.120 [INFO][4744] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.127 [INFO][4744] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.134 [INFO][4744] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.138 [INFO][4744] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.140 [INFO][4744] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.140 [INFO][4744] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.142 [INFO][4744] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1 Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.146 [INFO][4744] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.153 [INFO][4744] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.153 [INFO][4744] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" host="localhost" Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.153 [INFO][4744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
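The lock timestamps show the serialization at work: coredns-wtdw2's request is stamped 06:09:35.950 ("About to acquire host-wide IPAM lock.") but only acquires at 36.050, right after goldmane releases, and coredns-m7xfp in turn waits from 35.950 until 36.153. Calico serializes all IPAM writes on a host behind that single lock, so concurrent pod creations queue. A minimal sketch of the effect, with a plain mutex standing in for the host-wide lock:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    func main() {
    	var hostWideLock sync.Mutex // stand-in for Calico's host-wide IPAM lock
    	var wg sync.WaitGroup
    	pods := []string{"calico-kube-controllers", "goldmane", "coredns-wtdw2", "coredns-m7xfp"}
    	for _, pod := range pods {
    		wg.Add(1)
    		go func(pod string) {
    			defer wg.Done()
    			hostWideLock.Lock()         // "About to acquire host-wide IPAM lock."
    			defer hostWideLock.Unlock() // "Released host-wide IPAM lock."
    			fmt.Println("assigning IP for", pod)
    			time.Sleep(50 * time.Millisecond) // simulated datastore round-trips
    		}(pod)
    	}
    	wg.Wait()
    }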
Jul 7 06:09:36.183514 containerd[1443]: 2025-07-07 06:09:36.153 [INFO][4744] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" HandleID="k8s-pod-network.4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.184346 containerd[1443]: 2025-07-07 06:09:36.156 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"acb26600-e422-4fa9-86c9-1e99272ac907", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-wtdw2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb351f2839a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.184346 containerd[1443]: 2025-07-07 06:09:36.156 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.184346 containerd[1443]: 2025-07-07 06:09:36.156 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb351f2839a ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.184346 containerd[1443]: 2025-07-07 06:09:36.165 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.184346 containerd[1443]: 
2025-07-07 06:09:36.167 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"acb26600-e422-4fa9-86c9-1e99272ac907", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1", Pod:"coredns-668d6bf9bc-wtdw2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb351f2839a", MAC:"fa:5f:ea:20:29:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.184346 containerd[1443]: 2025-07-07 06:09:36.179 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1" Namespace="kube-system" Pod="coredns-668d6bf9bc-wtdw2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:36.199384 containerd[1443]: time="2025-07-07T06:09:36.199288028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:36.199384 containerd[1443]: time="2025-07-07T06:09:36.199349838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:36.199384 containerd[1443]: time="2025-07-07T06:09:36.199360920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.199632 containerd[1443]: time="2025-07-07T06:09:36.199434812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.219178 systemd[1]: Started cri-containerd-4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1.scope - libcontainer container 4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1. 
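The ports in the coredns endpoint dumps are printed in hex: Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153, coredns's Prometheus metrics port. A one-liner to confirm the conversion:

    package main

    import "fmt"

    func main() {
    	fmt.Println(0x35, 0x23c1) // 53 9153
    }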
Jul 7 06:09:36.229714 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:36.247973 containerd[1443]: time="2025-07-07T06:09:36.247922830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wtdw2,Uid:acb26600-e422-4fa9-86c9-1e99272ac907,Namespace:kube-system,Attempt:1,} returns sandbox id \"4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1\"" Jul 7 06:09:36.249163 kubelet[2485]: E0707 06:09:36.248663 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:36.256452 containerd[1443]: time="2025-07-07T06:09:36.256113264Z" level=info msg="CreateContainer within sandbox \"4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:09:36.267815 systemd[1]: run-netns-cni\x2d14024ee8\x2dc2b3\x2df614\x2d8746\x2dbb057f4a33e2.mount: Deactivated successfully. Jul 7 06:09:36.268105 systemd[1]: run-netns-cni\x2da61361d2\x2d7bad\x2dd1e9\x2dcfb5\x2d098759ad7730.mount: Deactivated successfully. Jul 7 06:09:36.268482 systemd[1]: run-netns-cni\x2d24396137\x2df9da\x2dc73b\x2db9f1\x2d2c03b0447448.mount: Deactivated successfully. Jul 7 06:09:36.270339 systemd-networkd[1386]: cali89bd0bda239: Link UP Jul 7 06:09:36.274131 systemd-networkd[1386]: cali89bd0bda239: Gained carrier Jul 7 06:09:36.276196 containerd[1443]: time="2025-07-07T06:09:36.276146897Z" level=info msg="CreateContainer within sandbox \"4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dba2299d38a318825548af1f58301d0e182a991d7e575d4c9aa0d1605b619918\"" Jul 7 06:09:36.278031 containerd[1443]: time="2025-07-07T06:09:36.277394383Z" level=info msg="StartContainer for \"dba2299d38a318825548af1f58301d0e182a991d7e575d4c9aa0d1605b619918\"" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:35.912 [INFO][4707] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0 coredns-668d6bf9bc- kube-system 6a09f92b-f03b-46c8-9b26-0233f582bf66 1009 0 2025-07-07 06:09:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-m7xfp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89bd0bda239 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:35.913 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:35.950 [INFO][4752] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" 
HandleID="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:35.950 [INFO][4752] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" HandleID="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-m7xfp", "timestamp":"2025-07-07 06:09:35.950539205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:35.950 [INFO][4752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.153 [INFO][4752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.153 [INFO][4752] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.220 [INFO][4752] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.229 [INFO][4752] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.235 [INFO][4752] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.239 [INFO][4752] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.242 [INFO][4752] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.244 [INFO][4752] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.245 [INFO][4752] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5 Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.251 [INFO][4752] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.261 [INFO][4752] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" host="localhost" Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.261 [INFO][4752] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" host="localhost" Jul 7 06:09:36.290742 
containerd[1443]: 2025-07-07 06:09:36.261 [INFO][4752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:36.290742 containerd[1443]: 2025-07-07 06:09:36.262 [INFO][4752] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" HandleID="k8s-pod-network.7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.291306 containerd[1443]: 2025-07-07 06:09:36.265 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a09f92b-f03b-46c8-9b26-0233f582bf66", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-m7xfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bd0bda239", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.291306 containerd[1443]: 2025-07-07 06:09:36.265 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.291306 containerd[1443]: 2025-07-07 06:09:36.265 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89bd0bda239 ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.291306 containerd[1443]: 2025-07-07 06:09:36.270 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.291306 containerd[1443]: 2025-07-07 06:09:36.272 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a09f92b-f03b-46c8-9b26-0233f582bf66", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5", Pod:"coredns-668d6bf9bc-m7xfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bd0bda239", MAC:"66:ba:37:3b:3c:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:36.291306 containerd[1443]: 2025-07-07 06:09:36.288 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5" Namespace="kube-system" Pod="coredns-668d6bf9bc-m7xfp" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:36.318174 systemd[1]: Started cri-containerd-dba2299d38a318825548af1f58301d0e182a991d7e575d4c9aa0d1605b619918.scope - libcontainer container dba2299d38a318825548af1f58301d0e182a991d7e575d4c9aa0d1605b619918. Jul 7 06:09:36.323188 containerd[1443]: time="2025-07-07T06:09:36.323080338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:36.323188 containerd[1443]: time="2025-07-07T06:09:36.323152550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:36.323188 containerd[1443]: time="2025-07-07T06:09:36.323167593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.323422 containerd[1443]: time="2025-07-07T06:09:36.323245725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:36.344174 systemd[1]: Started cri-containerd-7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5.scope - libcontainer container 7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5. Jul 7 06:09:36.357663 containerd[1443]: time="2025-07-07T06:09:36.357607127Z" level=info msg="StartContainer for \"dba2299d38a318825548af1f58301d0e182a991d7e575d4c9aa0d1605b619918\" returns successfully" Jul 7 06:09:36.365338 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:36.402782 containerd[1443]: time="2025-07-07T06:09:36.402726428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m7xfp,Uid:6a09f92b-f03b-46c8-9b26-0233f582bf66,Namespace:kube-system,Attempt:1,} returns sandbox id \"7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5\"" Jul 7 06:09:36.403781 kubelet[2485]: E0707 06:09:36.403733 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:36.407340 containerd[1443]: time="2025-07-07T06:09:36.407086509Z" level=info msg="CreateContainer within sandbox \"7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:09:36.422680 containerd[1443]: time="2025-07-07T06:09:36.422549426Z" level=info msg="CreateContainer within sandbox \"7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f06e80e8609b7e119f8d29acc7942af3eff8a4432130ee861d036fd8066695b2\"" Jul 7 06:09:36.423275 containerd[1443]: time="2025-07-07T06:09:36.423196253Z" level=info msg="StartContainer for \"f06e80e8609b7e119f8d29acc7942af3eff8a4432130ee861d036fd8066695b2\"" Jul 7 06:09:36.462179 systemd[1]: Started cri-containerd-f06e80e8609b7e119f8d29acc7942af3eff8a4432130ee861d036fd8066695b2.scope - libcontainer container f06e80e8609b7e119f8d29acc7942af3eff8a4432130ee861d036fd8066695b2. 
Jul 7 06:09:36.498451 containerd[1443]: time="2025-07-07T06:09:36.496652280Z" level=info msg="StartContainer for \"f06e80e8609b7e119f8d29acc7942af3eff8a4432130ee861d036fd8066695b2\" returns successfully" Jul 7 06:09:36.894581 kubelet[2485]: E0707 06:09:36.894134 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:36.929251 kubelet[2485]: E0707 06:09:36.928256 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:36.940350 kubelet[2485]: I0707 06:09:36.940299 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:09:36.963222 kubelet[2485]: I0707 06:09:36.963040 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m7xfp" podStartSLOduration=35.963025159 podStartE2EDuration="35.963025159s" podCreationTimestamp="2025-07-07 06:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:36.922600595 +0000 UTC m=+41.385455371" watchObservedRunningTime="2025-07-07 06:09:36.963025159 +0000 UTC m=+41.425879895" Jul 7 06:09:36.965722 kubelet[2485]: I0707 06:09:36.965582 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wtdw2" podStartSLOduration=35.96556734 podStartE2EDuration="35.96556734s" podCreationTimestamp="2025-07-07 06:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:36.962322963 +0000 UTC m=+41.425177739" watchObservedRunningTime="2025-07-07 06:09:36.96556734 +0000 UTC m=+41.428422076" Jul 7 06:09:37.232408 systemd-networkd[1386]: cali800d6a288b4: Gained IPv6LL Jul 7 06:09:37.255774 systemd[1]: run-containerd-runc-k8s.io-dba2299d38a318825548af1f58301d0e182a991d7e575d4c9aa0d1605b619918-runc.aOpzA6.mount: Deactivated successfully. 
Jul 7 06:09:37.296762 systemd-networkd[1386]: calidb351f2839a: Gained IPv6LL Jul 7 06:09:37.360357 systemd-networkd[1386]: cali89bd0bda239: Gained IPv6LL Jul 7 06:09:37.424135 systemd-networkd[1386]: caliba6f51d8f7e: Gained IPv6LL Jul 7 06:09:37.451717 containerd[1443]: time="2025-07-07T06:09:37.451661644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:37.452302 containerd[1443]: time="2025-07-07T06:09:37.452259461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 06:09:37.453275 containerd[1443]: time="2025-07-07T06:09:37.453238859Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:37.455163 containerd[1443]: time="2025-07-07T06:09:37.455123963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:09:37.455818 containerd[1443]: time="2025-07-07T06:09:37.455727101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.35051967s" Jul 7 06:09:37.455818 containerd[1443]: time="2025-07-07T06:09:37.455759026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 06:09:37.457226 containerd[1443]: time="2025-07-07T06:09:37.457191937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:09:37.463993 containerd[1443]: time="2025-07-07T06:09:37.463179824Z" level=info msg="CreateContainer within sandbox \"107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:09:37.479668 containerd[1443]: time="2025-07-07T06:09:37.479621159Z" level=info msg="CreateContainer within sandbox \"107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ae164fbde8b72cbd26b068b76de7f138abf57177176cbcb6aa0e5a0283018faa\"" Jul 7 06:09:37.480168 containerd[1443]: time="2025-07-07T06:09:37.480135962Z" level=info msg="StartContainer for \"ae164fbde8b72cbd26b068b76de7f138abf57177176cbcb6aa0e5a0283018faa\"" Jul 7 06:09:37.505168 systemd[1]: Started cri-containerd-ae164fbde8b72cbd26b068b76de7f138abf57177176cbcb6aa0e5a0283018faa.scope - libcontainer container ae164fbde8b72cbd26b068b76de7f138abf57177176cbcb6aa0e5a0283018faa. 
Jul 7 06:09:37.534018 containerd[1443]: time="2025-07-07T06:09:37.533950291Z" level=info msg="StartContainer for \"ae164fbde8b72cbd26b068b76de7f138abf57177176cbcb6aa0e5a0283018faa\" returns successfully" Jul 7 06:09:37.625952 containerd[1443]: time="2025-07-07T06:09:37.625904738Z" level=info msg="StopPodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\"" Jul 7 06:09:37.626455 containerd[1443]: time="2025-07-07T06:09:37.626416861Z" level=info msg="StopPodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\"" Jul 7 06:09:37.626923 containerd[1443]: time="2025-07-07T06:09:37.626768638Z" level=info msg="StopPodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\"" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.707 [INFO][5136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.707 [INFO][5136] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" iface="eth0" netns="/var/run/netns/cni-a008dee6-13c9-6198-e7fb-6f914300baed" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.708 [INFO][5136] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" iface="eth0" netns="/var/run/netns/cni-a008dee6-13c9-6198-e7fb-6f914300baed" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.708 [INFO][5136] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" iface="eth0" netns="/var/run/netns/cni-a008dee6-13c9-6198-e7fb-6f914300baed" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.708 [INFO][5136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.708 [INFO][5136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.733 [INFO][5166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.733 [INFO][5166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.733 [INFO][5166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.743 [WARNING][5166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.743 [INFO][5166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.744 [INFO][5166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:37.750890 containerd[1443]: 2025-07-07 06:09:37.746 [INFO][5136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:37.750890 containerd[1443]: time="2025-07-07T06:09:37.750606193Z" level=info msg="TearDown network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" successfully" Jul 7 06:09:37.750890 containerd[1443]: time="2025-07-07T06:09:37.750644999Z" level=info msg="StopPodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" returns successfully" Jul 7 06:09:37.751456 containerd[1443]: time="2025-07-07T06:09:37.751384279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586767dc6-st5cc,Uid:20b82c11-f0c8-4cab-bff0-a1f67bee9ab4,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.723 [INFO][5134] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.723 [INFO][5134] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" iface="eth0" netns="/var/run/netns/cni-3cb9373b-92c0-9426-10a5-ad16d397e2f0" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.723 [INFO][5134] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" iface="eth0" netns="/var/run/netns/cni-3cb9373b-92c0-9426-10a5-ad16d397e2f0" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.723 [INFO][5134] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" iface="eth0" netns="/var/run/netns/cni-3cb9373b-92c0-9426-10a5-ad16d397e2f0" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.723 [INFO][5134] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.723 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.753 [INFO][5173] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.753 [INFO][5173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.756 [INFO][5173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.765 [WARNING][5173] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.765 [INFO][5173] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.767 [INFO][5173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:37.771537 containerd[1443]: 2025-07-07 06:09:37.769 [INFO][5134] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:37.772114 containerd[1443]: time="2025-07-07T06:09:37.771676875Z" level=info msg="TearDown network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" successfully" Jul 7 06:09:37.772114 containerd[1443]: time="2025-07-07T06:09:37.771704320Z" level=info msg="StopPodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" returns successfully" Jul 7 06:09:37.772476 containerd[1443]: time="2025-07-07T06:09:37.772448080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txlwn,Uid:5705c009-0d57-436d-b155-b8ac4388465f,Namespace:calico-system,Attempt:1,}" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.702 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.702 [INFO][5135] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" iface="eth0" netns="/var/run/netns/cni-17bc82b0-d6e9-92ea-e692-b8410f3c9db8" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.702 [INFO][5135] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" iface="eth0" netns="/var/run/netns/cni-17bc82b0-d6e9-92ea-e692-b8410f3c9db8" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.702 [INFO][5135] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" iface="eth0" netns="/var/run/netns/cni-17bc82b0-d6e9-92ea-e692-b8410f3c9db8" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.702 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.703 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.737 [INFO][5159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.737 [INFO][5159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.744 [INFO][5159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.754 [WARNING][5159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.754 [INFO][5159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.756 [INFO][5159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:37.790519 containerd[1443]: 2025-07-07 06:09:37.758 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:37.790858 containerd[1443]: time="2025-07-07T06:09:37.790756196Z" level=info msg="TearDown network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" successfully" Jul 7 06:09:37.790858 containerd[1443]: time="2025-07-07T06:09:37.790776279Z" level=info msg="StopPodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" returns successfully" Jul 7 06:09:37.791412 containerd[1443]: time="2025-07-07T06:09:37.791384097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-k7kxf,Uid:2b552d16-8bbf-4c9c-b453-0c942c087079,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:09:37.917953 systemd-networkd[1386]: cali342ddfb41ef: Link UP Jul 7 06:09:37.918875 systemd-networkd[1386]: cali342ddfb41ef: Gained carrier Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.845 [INFO][5185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--txlwn-eth0 csi-node-driver- calico-system 5705c009-0d57-436d-b155-b8ac4388465f 1072 0 2025-07-07 06:09:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-txlwn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali342ddfb41ef [] [] }} ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.845 [INFO][5185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.874 [INFO][5228] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" HandleID="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.874 [INFO][5228] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" HandleID="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3240), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-txlwn", "timestamp":"2025-07-07 06:09:37.8742656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.874 [INFO][5228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.874 [INFO][5228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.874 [INFO][5228] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.883 [INFO][5228] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.888 [INFO][5228] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.892 [INFO][5228] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.894 [INFO][5228] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.896 [INFO][5228] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.896 [INFO][5228] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.898 [INFO][5228] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908 Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.902 [INFO][5228] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.907 [INFO][5228] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.908 [INFO][5228] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" host="localhost" Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.908 [INFO][5228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:37.933696 containerd[1443]: 2025-07-07 06:09:37.908 [INFO][5228] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" HandleID="k8s-pod-network.338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.934245 containerd[1443]: 2025-07-07 06:09:37.910 [INFO][5185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--txlwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5705c009-0d57-436d-b155-b8ac4388465f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-txlwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali342ddfb41ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:37.934245 containerd[1443]: 2025-07-07 06:09:37.911 [INFO][5185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.934245 containerd[1443]: 2025-07-07 06:09:37.911 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali342ddfb41ef ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.934245 containerd[1443]: 2025-07-07 06:09:37.918 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.934245 containerd[1443]: 2025-07-07 06:09:37.920 [INFO][5185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--txlwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5705c009-0d57-436d-b155-b8ac4388465f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908", Pod:"csi-node-driver-txlwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali342ddfb41ef", MAC:"be:da:ae:f4:f3:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:37.934245 containerd[1443]: 2025-07-07 06:09:37.931 [INFO][5185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908" Namespace="calico-system" Pod="csi-node-driver-txlwn" WorkloadEndpoint="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:37.946529 kubelet[2485]: E0707 06:09:37.946080 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:37.946529 kubelet[2485]: E0707 06:09:37.946215 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:09:37.952652 containerd[1443]: time="2025-07-07T06:09:37.952438302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:37.952652 containerd[1443]: time="2025-07-07T06:09:37.952508873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:37.952652 containerd[1443]: time="2025-07-07T06:09:37.952523675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:37.953833 containerd[1443]: time="2025-07-07T06:09:37.952610810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:37.966119 kubelet[2485]: I0707 06:09:37.957453 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fbfd84b85-7qqnq" podStartSLOduration=22.60478701 podStartE2EDuration="23.957435429s" podCreationTimestamp="2025-07-07 06:09:14 +0000 UTC" firstStartedPulling="2025-07-07 06:09:36.103987829 +0000 UTC m=+40.566842565" lastFinishedPulling="2025-07-07 06:09:37.456636288 +0000 UTC m=+41.919490984" observedRunningTime="2025-07-07 06:09:37.956528802 +0000 UTC m=+42.419383538" watchObservedRunningTime="2025-07-07 06:09:37.957435429 +0000 UTC m=+42.420290165" Jul 7 06:09:37.982596 systemd[1]: Started cri-containerd-338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908.scope - libcontainer container 338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908. Jul 7 06:09:38.040198 systemd-networkd[1386]: cali05322329496: Link UP Jul 7 06:09:38.042749 systemd-networkd[1386]: cali05322329496: Gained carrier Jul 7 06:09:38.053815 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.845 [INFO][5191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0 calico-apiserver-586767dc6- calico-apiserver 20b82c11-f0c8-4cab-bff0-a1f67bee9ab4 1071 0 2025-07-07 06:09:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586767dc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-586767dc6-st5cc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05322329496 [] [] }} ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.845 [INFO][5191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.878 [INFO][5230] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" HandleID="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.878 [INFO][5230] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" HandleID="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cec0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-586767dc6-st5cc", "timestamp":"2025-07-07 06:09:37.878720559 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.878 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.908 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.908 [INFO][5230] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.984 [INFO][5230] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:37.993 [INFO][5230] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.002 [INFO][5230] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.008 [INFO][5230] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.012 [INFO][5230] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.012 [INFO][5230] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.014 [INFO][5230] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.018 [INFO][5230] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.026 [INFO][5230] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.026 [INFO][5230] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" host="localhost" Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.026 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:38.060157 containerd[1443]: 2025-07-07 06:09:38.026 [INFO][5230] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" HandleID="k8s-pod-network.79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.060739 containerd[1443]: 2025-07-07 06:09:38.031 [INFO][5191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0", GenerateName:"calico-apiserver-586767dc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586767dc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-586767dc6-st5cc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05322329496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.060739 containerd[1443]: 2025-07-07 06:09:38.031 [INFO][5191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.060739 containerd[1443]: 2025-07-07 06:09:38.032 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05322329496 ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.060739 containerd[1443]: 2025-07-07 06:09:38.044 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.060739 containerd[1443]: 2025-07-07 06:09:38.045 [INFO][5191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0", GenerateName:"calico-apiserver-586767dc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586767dc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f", Pod:"calico-apiserver-586767dc6-st5cc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05322329496", MAC:"7e:1f:46:1a:08:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:38.060739 containerd[1443]: 2025-07-07 06:09:38.055 [INFO][5191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-st5cc" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:38.068031 containerd[1443]: time="2025-07-07T06:09:38.067982953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-txlwn,Uid:5705c009-0d57-436d-b155-b8ac4388465f,Namespace:calico-system,Attempt:1,} returns sandbox id \"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908\"" Jul 7 06:09:38.109204 containerd[1443]: time="2025-07-07T06:09:38.109043153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:09:38.109204 containerd[1443]: time="2025-07-07T06:09:38.109164932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:09:38.109204 containerd[1443]: time="2025-07-07T06:09:38.109182095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.110568 containerd[1443]: time="2025-07-07T06:09:38.109710978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:09:38.127225 systemd-networkd[1386]: cali19f2fb3a54e: Link UP Jul 7 06:09:38.128138 systemd[1]: Started cri-containerd-79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f.scope - libcontainer container 79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f. Jul 7 06:09:38.128827 systemd-networkd[1386]: cali19f2fb3a54e: Gained carrier Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:37.863 [INFO][5206] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0 calico-apiserver-5d7db7788- calico-apiserver 2b552d16-8bbf-4c9c-b453-0c942c087079 1070 0 2025-07-07 06:09:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7db7788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7db7788-k7kxf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19f2fb3a54e [] [] }} ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:37.863 [INFO][5206] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:37.888 [INFO][5242] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" HandleID="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:37.888 [INFO][5242] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" HandleID="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004251e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7db7788-k7kxf", "timestamp":"2025-07-07 06:09:37.888313788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:37.888 [INFO][5242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.026 [INFO][5242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.026 [INFO][5242] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.086 [INFO][5242] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.094 [INFO][5242] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.098 [INFO][5242] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.100 [INFO][5242] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.103 [INFO][5242] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.103 [INFO][5242] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.105 [INFO][5242] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4 Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.110 [INFO][5242] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.122 [INFO][5242] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.122 [INFO][5242] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" host="localhost" Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.122 [INFO][5242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:09:38.144145 containerd[1443]: 2025-07-07 06:09:38.122 [INFO][5242] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" HandleID="k8s-pod-network.e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0"
Jul 7 06:09:38.145094 containerd[1443]: 2025-07-07 06:09:38.124 [INFO][5206] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b552d16-8bbf-4c9c-b453-0c942c087079", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7db7788-k7kxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f2fb3a54e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:09:38.145094 containerd[1443]: 2025-07-07 06:09:38.124 [INFO][5206] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0"
Jul 7 06:09:38.145094 containerd[1443]: 2025-07-07 06:09:38.124 [INFO][5206] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19f2fb3a54e ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0"
Jul 7 06:09:38.145094 containerd[1443]: 2025-07-07 06:09:38.130 [INFO][5206] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0"
Jul 7 06:09:38.145094 containerd[1443]: 2025-07-07 06:09:38.131 [INFO][5206] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b552d16-8bbf-4c9c-b453-0c942c087079", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4", Pod:"calico-apiserver-5d7db7788-k7kxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f2fb3a54e", MAC:"b6:6c:19:85:50:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:09:38.145094 containerd[1443]: 2025-07-07 06:09:38.141 [INFO][5206] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4" Namespace="calico-apiserver" Pod="calico-apiserver-5d7db7788-k7kxf" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0"
Jul 7 06:09:38.147481 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:09:38.164359 containerd[1443]: time="2025-07-07T06:09:38.164225261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:09:38.164359 containerd[1443]: time="2025-07-07T06:09:38.164290872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:09:38.164359 containerd[1443]: time="2025-07-07T06:09:38.164306154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:38.164925 containerd[1443]: time="2025-07-07T06:09:38.164732101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:38.170895 containerd[1443]: time="2025-07-07T06:09:38.170861709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586767dc6-st5cc,Uid:20b82c11-f0c8-4cab-bff0-a1f67bee9ab4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f\""
Jul 7 06:09:38.173915 containerd[1443]: time="2025-07-07T06:09:38.173883025Z" level=info msg="CreateContainer within sandbox \"79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:09:38.185141 systemd[1]: Started cri-containerd-e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4.scope - libcontainer container e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4.
Jul 7 06:09:38.196469 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:09:38.211114 containerd[1443]: time="2025-07-07T06:09:38.211066213Z" level=info msg="CreateContainer within sandbox \"79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"57d11c380a0cc31f0d18ae9933fe90f6affdfcc4e21ce680f873b32d80385e30\""
Jul 7 06:09:38.211788 containerd[1443]: time="2025-07-07T06:09:38.211745921Z" level=info msg="StartContainer for \"57d11c380a0cc31f0d18ae9933fe90f6affdfcc4e21ce680f873b32d80385e30\""
Jul 7 06:09:38.213355 containerd[1443]: time="2025-07-07T06:09:38.213315248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7db7788-k7kxf,Uid:2b552d16-8bbf-4c9c-b453-0c942c087079,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4\""
Jul 7 06:09:38.216153 containerd[1443]: time="2025-07-07T06:09:38.216122051Z" level=info msg="CreateContainer within sandbox \"e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:09:38.241236 systemd[1]: Started cri-containerd-57d11c380a0cc31f0d18ae9933fe90f6affdfcc4e21ce680f873b32d80385e30.scope - libcontainer container 57d11c380a0cc31f0d18ae9933fe90f6affdfcc4e21ce680f873b32d80385e30.
Jul 7 06:09:38.261549 systemd[1]: run-netns-cni\x2d3cb9373b\x2d92c0\x2d9426\x2d10a5\x2dad16d397e2f0.mount: Deactivated successfully.
Jul 7 06:09:38.261650 systemd[1]: run-netns-cni\x2da008dee6\x2d13c9\x2d6198\x2de7fb\x2d6f914300baed.mount: Deactivated successfully.
Jul 7 06:09:38.261746 systemd[1]: run-netns-cni\x2d17bc82b0\x2dd6e9\x2d92ea\x2de692\x2db8410f3c9db8.mount: Deactivated successfully.
Jul 7 06:09:38.265198 containerd[1443]: time="2025-07-07T06:09:38.265148388Z" level=info msg="CreateContainer within sandbox \"e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb43e2e7f8ebe3d02bdb24c3e4c6d99124af0cbd1686c83f7a83c8f3a8476021\""
Jul 7 06:09:38.265930 containerd[1443]: time="2025-07-07T06:09:38.265879784Z" level=info msg="StartContainer for \"eb43e2e7f8ebe3d02bdb24c3e4c6d99124af0cbd1686c83f7a83c8f3a8476021\""
Jul 7 06:09:38.297537 containerd[1443]: time="2025-07-07T06:09:38.297418361Z" level=info msg="StartContainer for \"57d11c380a0cc31f0d18ae9933fe90f6affdfcc4e21ce680f873b32d80385e30\" returns successfully"
Jul 7 06:09:38.305570 systemd[1]: Started cri-containerd-eb43e2e7f8ebe3d02bdb24c3e4c6d99124af0cbd1686c83f7a83c8f3a8476021.scope - libcontainer container eb43e2e7f8ebe3d02bdb24c3e4c6d99124af0cbd1686c83f7a83c8f3a8476021.
Jul 7 06:09:38.346335 containerd[1443]: time="2025-07-07T06:09:38.346227983Z" level=info msg="StartContainer for \"eb43e2e7f8ebe3d02bdb24c3e4c6d99124af0cbd1686c83f7a83c8f3a8476021\" returns successfully"
Jul 7 06:09:38.448201 systemd[1]: Started sshd@8-10.0.0.114:22-10.0.0.1:52542.service - OpenSSH per-connection server daemon (10.0.0.1:52542).
Jul 7 06:09:38.503723 sshd[5487]: Accepted publickey for core from 10.0.0.1 port 52542 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:38.504909 sshd[5487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:38.514877 systemd-logind[1425]: New session 9 of user core.
Jul 7 06:09:38.521121 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 7 06:09:38.940440 sshd[5487]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:38.945334 systemd[1]: sshd@8-10.0.0.114:22-10.0.0.1:52542.service: Deactivated successfully.
Jul 7 06:09:38.950130 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:09:38.951916 systemd-logind[1425]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:09:38.953771 systemd-logind[1425]: Removed session 9.
Jul 7 06:09:38.962071 kubelet[2485]: I0707 06:09:38.962036 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:38.965007 kubelet[2485]: E0707 06:09:38.963523 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:38.965007 kubelet[2485]: E0707 06:09:38.964049 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:38.966550 kubelet[2485]: I0707 06:09:38.966478 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7db7788-k7kxf" podStartSLOduration=28.966452422 podStartE2EDuration="28.966452422s" podCreationTimestamp="2025-07-07 06:09:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:38.965381733 +0000 UTC m=+43.428236469" watchObservedRunningTime="2025-07-07 06:09:38.966452422 +0000 UTC m=+43.429307158"
Jul 7 06:09:38.979682 kubelet[2485]: I0707 06:09:38.978588 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-586767dc6-st5cc" podStartSLOduration=27.978569614 podStartE2EDuration="27.978569614s" podCreationTimestamp="2025-07-07 06:09:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:38.977992363 +0000 UTC m=+43.440847099" watchObservedRunningTime="2025-07-07 06:09:38.978569614 +0000 UTC m=+43.441424350"
Jul 7 06:09:39.339948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249812756.mount: Deactivated successfully.
Jul 7 06:09:39.664561 systemd-networkd[1386]: cali342ddfb41ef: Gained IPv6LL
Jul 7 06:09:39.856286 systemd-networkd[1386]: cali19f2fb3a54e: Gained IPv6LL
Jul 7 06:09:39.939479 containerd[1443]: time="2025-07-07T06:09:39.939110149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:39.940831 containerd[1443]: time="2025-07-07T06:09:39.940719477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790"
Jul 7 06:09:39.942288 containerd[1443]: time="2025-07-07T06:09:39.942255074Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:39.946132 containerd[1443]: time="2025-07-07T06:09:39.945845709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:39.946335 containerd[1443]: time="2025-07-07T06:09:39.946302859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.489072916s"
Jul 7 06:09:39.946390 containerd[1443]: time="2025-07-07T06:09:39.946339025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\""
Jul 7 06:09:39.948759 containerd[1443]: time="2025-07-07T06:09:39.947788849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 7 06:09:39.948840 containerd[1443]: time="2025-07-07T06:09:39.948763039Z" level=info msg="CreateContainer within sandbox \"7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 7 06:09:39.966396 kubelet[2485]: I0707 06:09:39.966339 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:39.966726 kubelet[2485]: E0707 06:09:39.966638 2485 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:09:39.973706 containerd[1443]: time="2025-07-07T06:09:39.973665444Z" level=info msg="CreateContainer within sandbox \"7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"272c6cb997fd5ced9f4524bb74e1294b8e38e558c92a4b41c48e9488f78fc371\""
Jul 7 06:09:39.974183 containerd[1443]: time="2025-07-07T06:09:39.974126875Z" level=info msg="StartContainer for \"272c6cb997fd5ced9f4524bb74e1294b8e38e558c92a4b41c48e9488f78fc371\""
Jul 7 06:09:40.015161 systemd[1]: Started cri-containerd-272c6cb997fd5ced9f4524bb74e1294b8e38e558c92a4b41c48e9488f78fc371.scope - libcontainer container 272c6cb997fd5ced9f4524bb74e1294b8e38e558c92a4b41c48e9488f78fc371.
Jul 7 06:09:40.048168 systemd-networkd[1386]: cali05322329496: Gained IPv6LL
Jul 7 06:09:40.059466 containerd[1443]: time="2025-07-07T06:09:40.059329002Z" level=info msg="StartContainer for \"272c6cb997fd5ced9f4524bb74e1294b8e38e558c92a4b41c48e9488f78fc371\" returns successfully"
Jul 7 06:09:40.399995 kubelet[2485]: I0707 06:09:40.399915 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:40.408126 containerd[1443]: time="2025-07-07T06:09:40.408084525Z" level=info msg="StopContainer for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" with timeout 30 (s)"
Jul 7 06:09:40.409433 containerd[1443]: time="2025-07-07T06:09:40.409321672Z" level=info msg="Stop container \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" with signal terminated"
Jul 7 06:09:40.423350 systemd[1]: cri-containerd-28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5.scope: Deactivated successfully.
Jul 7 06:09:40.423628 systemd[1]: cri-containerd-28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5.scope: Consumed 1.085s CPU time.
Jul 7 06:09:40.457913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5-rootfs.mount: Deactivated successfully.
Jul 7 06:09:40.467423 systemd[1]: Created slice kubepods-besteffort-poda2f40ff4_c1fb_4262_9265_a6fc7e8e82b3.slice - libcontainer container kubepods-besteffort-poda2f40ff4_c1fb_4262_9265_a6fc7e8e82b3.slice.
Jul 7 06:09:40.518804 containerd[1443]: time="2025-07-07T06:09:40.516625214Z" level=info msg="shim disconnected" id=28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5 namespace=k8s.io
Jul 7 06:09:40.518804 containerd[1443]: time="2025-07-07T06:09:40.518805784Z" level=warning msg="cleaning up after shim disconnected" id=28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5 namespace=k8s.io
Jul 7 06:09:40.519064 containerd[1443]: time="2025-07-07T06:09:40.518818746Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:09:40.529782 containerd[1443]: time="2025-07-07T06:09:40.529724794Z" level=warning msg="cleanup warnings time=\"2025-07-07T06:09:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 7 06:09:40.534191 containerd[1443]: time="2025-07-07T06:09:40.534148383Z" level=info msg="StopContainer for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" returns successfully"
Jul 7 06:09:40.534705 containerd[1443]: time="2025-07-07T06:09:40.534674423Z" level=info msg="StopPodSandbox for \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\""
Jul 7 06:09:40.534759 containerd[1443]: time="2025-07-07T06:09:40.534720350Z" level=info msg="Container to stop \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 7 06:09:40.537340 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131-shm.mount: Deactivated successfully.
Jul 7 06:09:40.541856 systemd[1]: cri-containerd-59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131.scope: Deactivated successfully.
Jul 7 06:09:40.555157 kubelet[2485]: I0707 06:09:40.555123 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3-calico-apiserver-certs\") pod \"calico-apiserver-586767dc6-h2m2h\" (UID: \"a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3\") " pod="calico-apiserver/calico-apiserver-586767dc6-h2m2h"
Jul 7 06:09:40.555290 kubelet[2485]: I0707 06:09:40.555170 2485 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz2k7\" (UniqueName: \"kubernetes.io/projected/a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3-kube-api-access-dz2k7\") pod \"calico-apiserver-586767dc6-h2m2h\" (UID: \"a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3\") " pod="calico-apiserver/calico-apiserver-586767dc6-h2m2h"
Jul 7 06:09:40.561641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131-rootfs.mount: Deactivated successfully.
Jul 7 06:09:40.566726 containerd[1443]: time="2025-07-07T06:09:40.566600009Z" level=info msg="shim disconnected" id=59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131 namespace=k8s.io
Jul 7 06:09:40.566726 containerd[1443]: time="2025-07-07T06:09:40.566656178Z" level=warning msg="cleaning up after shim disconnected" id=59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131 namespace=k8s.io
Jul 7 06:09:40.566726 containerd[1443]: time="2025-07-07T06:09:40.566664459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 06:09:40.637737 systemd-networkd[1386]: caliacecd82815c: Link DOWN
Jul 7 06:09:40.637927 systemd-networkd[1386]: caliacecd82815c: Lost carrier
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.635 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.636 [INFO][5633] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" iface="eth0" netns="/var/run/netns/cni-e3a7dac8-7fb3-3886-d708-ac5e56d37bef"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.636 [INFO][5633] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" iface="eth0" netns="/var/run/netns/cni-e3a7dac8-7fb3-3886-d708-ac5e56d37bef"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.655 [INFO][5633] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" after=19.090606ms iface="eth0" netns="/var/run/netns/cni-e3a7dac8-7fb3-3886-d708-ac5e56d37bef"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.655 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.655 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.685 [INFO][5647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.685 [INFO][5647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.685 [INFO][5647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.760 [INFO][5647] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.760 [INFO][5647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0"
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.762 [INFO][5647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:09:40.767313 containerd[1443]: 2025-07-07 06:09:40.764 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131"
Jul 7 06:09:40.767723 containerd[1443]: time="2025-07-07T06:09:40.767571191Z" level=info msg="TearDown network for sandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" successfully"
Jul 7 06:09:40.767723 containerd[1443]: time="2025-07-07T06:09:40.767624839Z" level=info msg="StopPodSandbox for \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" returns successfully"
Jul 7 06:09:40.768666 containerd[1443]: time="2025-07-07T06:09:40.768622550Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\""
Jul 7 06:09:40.772679 containerd[1443]: time="2025-07-07T06:09:40.772644918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586767dc6-h2m2h,Uid:a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.824 [WARNING][5666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"c16f1b03-a360-4556-a60c-eadfcd16ef1e", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131", Pod:"calico-apiserver-5d7db7788-2k4fl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliacecd82815c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.826 [INFO][5666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.826 [INFO][5666] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" iface="eth0" netns=""
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.826 [INFO][5666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.826 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.848 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.849 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.849 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.871 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.871 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0"
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.876 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:09:40.885856 containerd[1443]: 2025-07-07 06:09:40.881 [INFO][5666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a"
Jul 7 06:09:40.885856 containerd[1443]: time="2025-07-07T06:09:40.885561949Z" level=info msg="TearDown network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" successfully"
Jul 7 06:09:40.887539 containerd[1443]: time="2025-07-07T06:09:40.885586552Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" returns successfully"
Jul 7 06:09:40.946435 systemd-networkd[1386]: calieefc0032b0c: Link UP
Jul 7 06:09:40.947058 systemd-networkd[1386]: calieefc0032b0c: Gained carrier
Jul 7 06:09:40.958548 kubelet[2485]: I0707 06:09:40.958223 2485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4smm\" (UniqueName: \"kubernetes.io/projected/c16f1b03-a360-4556-a60c-eadfcd16ef1e-kube-api-access-m4smm\") pod \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\" (UID: \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\") "
Jul 7 06:09:40.958548 kubelet[2485]: I0707 06:09:40.958280 2485 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c16f1b03-a360-4556-a60c-eadfcd16ef1e-calico-apiserver-certs\") pod \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\" (UID: \"c16f1b03-a360-4556-a60c-eadfcd16ef1e\") "
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.846 [INFO][5673] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0 calico-apiserver-586767dc6- calico-apiserver a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3 1149 0 2025-07-07 06:09:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:586767dc6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-586767dc6-h2m2h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieefc0032b0c [] [] }} ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.847 [INFO][5673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.885 [INFO][5697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" HandleID="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Workload="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.885 [INFO][5697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" HandleID="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Workload="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b74b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-586767dc6-h2m2h", "timestamp":"2025-07-07 06:09:40.885538985 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.885 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.885 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.886 [INFO][5697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.900 [INFO][5697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.906 [INFO][5697] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.915 [INFO][5697] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.920 [INFO][5697] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.924 [INFO][5697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.924 [INFO][5697] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.926 [INFO][5697] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.933 [INFO][5697] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.941 [INFO][5697] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.941 [INFO][5697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" host="localhost"
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.941 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:09:40.964015 containerd[1443]: 2025-07-07 06:09:40.941 [INFO][5697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" HandleID="k8s-pod-network.6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Workload="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.964795 containerd[1443]: 2025-07-07 06:09:40.943 [INFO][5673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0", GenerateName:"calico-apiserver-586767dc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586767dc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-586767dc6-h2m2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieefc0032b0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:09:40.964795 containerd[1443]: 2025-07-07 06:09:40.943 [INFO][5673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.964795 containerd[1443]: 2025-07-07 06:09:40.943 [INFO][5673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieefc0032b0c ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.964795 containerd[1443]: 2025-07-07 06:09:40.945 [INFO][5673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.964795 containerd[1443]: 2025-07-07 06:09:40.946 [INFO][5673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0", GenerateName:"calico-apiserver-586767dc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586767dc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9", Pod:"calico-apiserver-586767dc6-h2m2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieefc0032b0c", MAC:"c6:87:bb:b3:54:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:09:40.964795 containerd[1443]: 2025-07-07 06:09:40.955 [INFO][5673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9" Namespace="calico-apiserver" Pod="calico-apiserver-586767dc6-h2m2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--586767dc6--h2m2h-eth0"
Jul 7 06:09:40.967726 kubelet[2485]: I0707 06:09:40.965498 2485 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c16f1b03-a360-4556-a60c-eadfcd16ef1e-kube-api-access-m4smm" (OuterVolumeSpecName: "kube-api-access-m4smm") pod "c16f1b03-a360-4556-a60c-eadfcd16ef1e" (UID: "c16f1b03-a360-4556-a60c-eadfcd16ef1e"). InnerVolumeSpecName "kube-api-access-m4smm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 7 06:09:40.969287 kubelet[2485]: I0707 06:09:40.967772 2485 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c16f1b03-a360-4556-a60c-eadfcd16ef1e-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c16f1b03-a360-4556-a60c-eadfcd16ef1e" (UID: "c16f1b03-a360-4556-a60c-eadfcd16ef1e"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 7 06:09:40.974579 kubelet[2485]: I0707 06:09:40.974298 2485 scope.go:117] "RemoveContainer" containerID="28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5"
Jul 7 06:09:40.978524 containerd[1443]: time="2025-07-07T06:09:40.978430668Z" level=info msg="RemoveContainer for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\""
Jul 7 06:09:40.987033 systemd[1]: Removed slice kubepods-besteffort-podc16f1b03_a360_4556_a60c_eadfcd16ef1e.slice - libcontainer container kubepods-besteffort-podc16f1b03_a360_4556_a60c_eadfcd16ef1e.slice.
Jul 7 06:09:40.987127 systemd[1]: kubepods-besteffort-podc16f1b03_a360_4556_a60c_eadfcd16ef1e.slice: Consumed 1.099s CPU time.
Jul 7 06:09:40.992038 containerd[1443]: time="2025-07-07T06:09:40.991822773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:09:40.992038 containerd[1443]: time="2025-07-07T06:09:40.991895744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:09:40.992038 containerd[1443]: time="2025-07-07T06:09:40.991908346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:40.994202 containerd[1443]: time="2025-07-07T06:09:40.994091556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:09:41.001049 kubelet[2485]: I0707 06:09:41.000988 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-lmf8d" podStartSLOduration=23.219099892 podStartE2EDuration="27.000945472s" podCreationTimestamp="2025-07-07 06:09:14 +0000 UTC" firstStartedPulling="2025-07-07 06:09:36.165428669 +0000 UTC m=+40.628283405" lastFinishedPulling="2025-07-07 06:09:39.947274249 +0000 UTC m=+44.410128985" observedRunningTime="2025-07-07 06:09:41.000586338 +0000 UTC m=+45.463441074" watchObservedRunningTime="2025-07-07 06:09:41.000945472 +0000 UTC m=+45.463800208"
Jul 7 06:09:41.002485 containerd[1443]: time="2025-07-07T06:09:41.001054248Z" level=info msg="RemoveContainer for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" returns successfully"
Jul 7 06:09:41.003788 kubelet[2485]: I0707 06:09:41.001868 2485 scope.go:117] "RemoveContainer" containerID="28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5"
Jul 7 06:09:41.004124 containerd[1443]: time="2025-07-07T06:09:41.004045611Z" level=error msg="ContainerStatus for \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\": not found"
Jul 7 06:09:41.004591 kubelet[2485]: E0707 06:09:41.004435 2485 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\": not found" containerID="28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5"
Jul 7 06:09:41.004591 kubelet[2485]: I0707 06:09:41.004474 2485 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5"} err="failed to get container status \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"28bbc12fb70d04193abdce1a9ce9eb9696fd8a525202e1d80be39ea3b9526fa5\": not found"
Jul 7 06:09:41.012416 containerd[1443]: time="2025-07-07T06:09:41.011862889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:41.016142 containerd[1443]: time="2025-07-07T06:09:41.014582532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702"
Jul 7 06:09:41.016457 containerd[1443]: time="2025-07-07T06:09:41.016359076Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:41.022504 containerd[1443]: time="2025-07-07T06:09:41.022182739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:41.023457 containerd[1443]: time="2025-07-07T06:09:41.023221372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.075399838s"
Jul 7 06:09:41.023457 containerd[1443]: time="2025-07-07T06:09:41.023347031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\""
Jul 7 06:09:41.026733 containerd[1443]: time="2025-07-07T06:09:41.026355077Z" level=info msg="CreateContainer within sandbox \"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 7 06:09:41.029190 systemd[1]: Started cri-containerd-6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9.scope - libcontainer container 6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9.
Jul 7 06:09:41.040813 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:09:41.046118 containerd[1443]: time="2025-07-07T06:09:41.046077639Z" level=info msg="CreateContainer within sandbox \"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5687c2dcad282fd78113921b2ebe36735dda6af5f6716fea705d7467260ff6d8\""
Jul 7 06:09:41.047134 containerd[1443]: time="2025-07-07T06:09:41.047093029Z" level=info msg="StartContainer for \"5687c2dcad282fd78113921b2ebe36735dda6af5f6716fea705d7467260ff6d8\""
Jul 7 06:09:41.059112 kubelet[2485]: I0707 06:09:41.059076 2485 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m4smm\" (UniqueName: \"kubernetes.io/projected/c16f1b03-a360-4556-a60c-eadfcd16ef1e-kube-api-access-m4smm\") on node \"localhost\" DevicePath \"\""
Jul 7 06:09:41.059502 kubelet[2485]: I0707 06:09:41.059301 2485 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c16f1b03-a360-4556-a60c-eadfcd16ef1e-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 7 06:09:41.067018 containerd[1443]: time="2025-07-07T06:09:41.066767705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-586767dc6-h2m2h,Uid:a2f40ff4-c1fb-4262-9265-a6fc7e8e82b3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9\""
Jul 7 06:09:41.069542 containerd[1443]: time="2025-07-07T06:09:41.069505870Z" level=info msg="CreateContainer within sandbox \"6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:09:41.080141 systemd[1]: Started cri-containerd-5687c2dcad282fd78113921b2ebe36735dda6af5f6716fea705d7467260ff6d8.scope - libcontainer container 5687c2dcad282fd78113921b2ebe36735dda6af5f6716fea705d7467260ff6d8.
Jul 7 06:09:41.093318 containerd[1443]: time="2025-07-07T06:09:41.093067681Z" level=info msg="CreateContainer within sandbox \"6da999729e5b1a678b7a435a1d8f8f658bdd719defabb6002aa9fb8f2cd29da9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"edf06323d5553440eabdf10a2d34d9d0c138b20065995f78fdb24c5f992193c4\""
Jul 7 06:09:41.097247 containerd[1443]: time="2025-07-07T06:09:41.097218776Z" level=info msg="StartContainer for \"edf06323d5553440eabdf10a2d34d9d0c138b20065995f78fdb24c5f992193c4\""
Jul 7 06:09:41.111362 containerd[1443]: time="2025-07-07T06:09:41.111315585Z" level=info msg="StartContainer for \"5687c2dcad282fd78113921b2ebe36735dda6af5f6716fea705d7467260ff6d8\" returns successfully"
Jul 7 06:09:41.112742 containerd[1443]: time="2025-07-07T06:09:41.112710072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 7 06:09:41.138267 systemd[1]: Started cri-containerd-edf06323d5553440eabdf10a2d34d9d0c138b20065995f78fdb24c5f992193c4.scope - libcontainer container edf06323d5553440eabdf10a2d34d9d0c138b20065995f78fdb24c5f992193c4.
Jul 7 06:09:41.173845 containerd[1443]: time="2025-07-07T06:09:41.173736634Z" level=info msg="StartContainer for \"edf06323d5553440eabdf10a2d34d9d0c138b20065995f78fdb24c5f992193c4\" returns successfully"
Jul 7 06:09:41.461113 systemd[1]: run-netns-cni\x2de3a7dac8\x2d7fb3\x2d3886\x2dd708\x2dac5e56d37bef.mount: Deactivated successfully.
Jul 7 06:09:41.461712 systemd[1]: var-lib-kubelet-pods-c16f1b03\x2da360\x2d4556\x2da60c\x2deadfcd16ef1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm4smm.mount: Deactivated successfully.
Jul 7 06:09:41.461792 systemd[1]: var-lib-kubelet-pods-c16f1b03\x2da360\x2d4556\x2da60c\x2deadfcd16ef1e-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 7 06:09:41.627299 kubelet[2485]: I0707 06:09:41.627252 2485 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c16f1b03-a360-4556-a60c-eadfcd16ef1e" path="/var/lib/kubelet/pods/c16f1b03-a360-4556-a60c-eadfcd16ef1e/volumes"
Jul 7 06:09:41.997686 kubelet[2485]: I0707 06:09:41.997185 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:42.009230 kubelet[2485]: I0707 06:09:42.008511 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-586767dc6-h2m2h" podStartSLOduration=2.008498816 podStartE2EDuration="2.008498816s" podCreationTimestamp="2025-07-07 06:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:09:42.007814317 +0000 UTC m=+46.470669053" watchObservedRunningTime="2025-07-07 06:09:42.008498816 +0000 UTC m=+46.471353552"
Jul 7 06:09:42.123821 kubelet[2485]: I0707 06:09:42.123785 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:42.144723 containerd[1443]: time="2025-07-07T06:09:42.144670288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:42.151043 containerd[1443]: time="2025-07-07T06:09:42.150996808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 7 06:09:42.152074 containerd[1443]: time="2025-07-07T06:09:42.152023597Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:42.157400 containerd[1443]: time="2025-07-07T06:09:42.157148062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:09:42.157871 containerd[1443]: time="2025-07-07T06:09:42.157831321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.045087205s"
Jul 7 06:09:42.157943 containerd[1443]: time="2025-07-07T06:09:42.157864806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 7 06:09:42.165351 containerd[1443]: time="2025-07-07T06:09:42.165192231Z" level=info msg="CreateContainer within sandbox \"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 7 06:09:42.181130 containerd[1443]: time="2025-07-07T06:09:42.180455769Z" level=info msg="CreateContainer within sandbox \"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"78f6cdb7df715c7234de9c8a46ec6798dc7d9df1de1838a008f3ba4f0c645569\""
Jul 7 06:09:42.183113 containerd[1443]: time="2025-07-07T06:09:42.181790603Z" level=info msg="StartContainer for \"78f6cdb7df715c7234de9c8a46ec6798dc7d9df1de1838a008f3ba4f0c645569\""
Jul 7 06:09:42.213281 systemd[1]: run-containerd-runc-k8s.io-78f6cdb7df715c7234de9c8a46ec6798dc7d9df1de1838a008f3ba4f0c645569-runc.Cnl4WR.mount: Deactivated successfully.
Jul 7 06:09:42.224138 systemd[1]: Started cri-containerd-78f6cdb7df715c7234de9c8a46ec6798dc7d9df1de1838a008f3ba4f0c645569.scope - libcontainer container 78f6cdb7df715c7234de9c8a46ec6798dc7d9df1de1838a008f3ba4f0c645569.
Jul 7 06:09:42.224388 systemd-networkd[1386]: calieefc0032b0c: Gained IPv6LL
Jul 7 06:09:42.275297 containerd[1443]: time="2025-07-07T06:09:42.275214182Z" level=info msg="StartContainer for \"78f6cdb7df715c7234de9c8a46ec6798dc7d9df1de1838a008f3ba4f0c645569\" returns successfully"
Jul 7 06:09:42.702544 kubelet[2485]: I0707 06:09:42.702433 2485 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 7 06:09:42.706127 kubelet[2485]: I0707 06:09:42.706092 2485 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 7 06:09:43.001264 kubelet[2485]: I0707 06:09:43.001166 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:43.052059 kubelet[2485]: I0707 06:09:43.051995 2485 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-txlwn" podStartSLOduration=24.960053692 podStartE2EDuration="29.051976906s" podCreationTimestamp="2025-07-07 06:09:14 +0000 UTC" firstStartedPulling="2025-07-07 06:09:38.069251073 +0000 UTC m=+42.532105809" lastFinishedPulling="2025-07-07 06:09:42.161174287 +0000 UTC m=+46.624029023" observedRunningTime="2025-07-07 06:09:43.051041812 +0000 UTC m=+47.513896588" watchObservedRunningTime="2025-07-07 06:09:43.051976906 +0000 UTC m=+47.514831642"
Jul 7 06:09:43.955205 systemd[1]: Started sshd@9-10.0.0.114:22-10.0.0.1:34290.service - OpenSSH per-connection server daemon (10.0.0.1:34290).
Jul 7 06:09:44.002542 sshd[5957]: Accepted publickey for core from 10.0.0.1 port 34290 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:44.004486 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:44.008198 systemd-logind[1425]: New session 10 of user core.
Jul 7 06:09:44.020137 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 7 06:09:44.428181 sshd[5957]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:44.437626 systemd[1]: sshd@9-10.0.0.114:22-10.0.0.1:34290.service: Deactivated successfully.
Jul 7 06:09:44.439312 systemd[1]: session-10.scope: Deactivated successfully.
Jul 7 06:09:44.441128 systemd-logind[1425]: Session 10 logged out. Waiting for processes to exit.
Jul 7 06:09:44.441917 systemd[1]: Started sshd@10-10.0.0.114:22-10.0.0.1:34304.service - OpenSSH per-connection server daemon (10.0.0.1:34304).
Jul 7 06:09:44.442827 systemd-logind[1425]: Removed session 10.
Jul 7 06:09:44.477654 sshd[5972]: Accepted publickey for core from 10.0.0.1 port 34304 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:44.478911 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:44.483029 systemd-logind[1425]: New session 11 of user core.
Jul 7 06:09:44.489118 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:09:44.759502 sshd[5972]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:44.769684 systemd[1]: sshd@10-10.0.0.114:22-10.0.0.1:34304.service: Deactivated successfully.
Jul 7 06:09:44.771936 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:09:44.773352 systemd-logind[1425]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:09:44.774675 systemd[1]: Started sshd@11-10.0.0.114:22-10.0.0.1:34316.service - OpenSSH per-connection server daemon (10.0.0.1:34316).
Jul 7 06:09:44.775598 systemd-logind[1425]: Removed session 11.
Jul 7 06:09:44.819660 sshd[5984]: Accepted publickey for core from 10.0.0.1 port 34316 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:44.821259 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:44.825558 systemd-logind[1425]: New session 12 of user core.
Jul 7 06:09:44.833150 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:09:44.960421 sshd[5984]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:44.964006 systemd[1]: sshd@11-10.0.0.114:22-10.0.0.1:34316.service: Deactivated successfully.
Jul 7 06:09:44.965836 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:09:44.966520 systemd-logind[1425]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:09:44.967396 systemd-logind[1425]: Removed session 12.
Jul 7 06:09:47.335066 kubelet[2485]: I0707 06:09:47.334953 2485 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 7 06:09:49.974734 systemd[1]: Started sshd@12-10.0.0.114:22-10.0.0.1:34324.service - OpenSSH per-connection server daemon (10.0.0.1:34324).
Jul 7 06:09:50.018940 sshd[6098]: Accepted publickey for core from 10.0.0.1 port 34324 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:50.020509 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:50.024407 systemd-logind[1425]: New session 13 of user core.
Jul 7 06:09:50.033105 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:09:50.201185 sshd[6098]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:50.212593 systemd[1]: sshd@12-10.0.0.114:22-10.0.0.1:34324.service: Deactivated successfully.
Jul 7 06:09:50.214436 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:09:50.216788 systemd-logind[1425]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:09:50.222219 systemd[1]: Started sshd@13-10.0.0.114:22-10.0.0.1:34336.service - OpenSSH per-connection server daemon (10.0.0.1:34336).
Jul 7 06:09:50.223111 systemd-logind[1425]: Removed session 13.
Jul 7 06:09:50.256808 sshd[6112]: Accepted publickey for core from 10.0.0.1 port 34336 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:50.258326 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:50.262038 systemd-logind[1425]: New session 14 of user core.
Jul 7 06:09:50.273124 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:09:50.455854 sshd[6112]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:50.469432 systemd[1]: sshd@13-10.0.0.114:22-10.0.0.1:34336.service: Deactivated successfully.
Jul 7 06:09:50.471371 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:09:50.472863 systemd-logind[1425]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:09:50.474349 systemd[1]: Started sshd@14-10.0.0.114:22-10.0.0.1:34352.service - OpenSSH per-connection server daemon (10.0.0.1:34352).
Jul 7 06:09:50.476153 systemd-logind[1425]: Removed session 14.
Jul 7 06:09:50.515341 sshd[6125]: Accepted publickey for core from 10.0.0.1 port 34352 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:50.516713 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:50.520893 systemd-logind[1425]: New session 15 of user core.
Jul 7 06:09:50.531108 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:09:51.269432 sshd[6125]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:51.278504 systemd[1]: sshd@14-10.0.0.114:22-10.0.0.1:34352.service: Deactivated successfully.
Jul 7 06:09:51.281370 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:09:51.286739 systemd-logind[1425]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:09:51.292301 systemd[1]: Started sshd@15-10.0.0.114:22-10.0.0.1:34356.service - OpenSSH per-connection server daemon (10.0.0.1:34356).
Jul 7 06:09:51.294520 systemd-logind[1425]: Removed session 15.
Jul 7 06:09:51.330711 sshd[6148]: Accepted publickey for core from 10.0.0.1 port 34356 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:51.332075 sshd[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:51.336373 systemd-logind[1425]: New session 16 of user core.
Jul 7 06:09:51.347117 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:09:51.732103 sshd[6148]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:51.742366 systemd[1]: sshd@15-10.0.0.114:22-10.0.0.1:34356.service: Deactivated successfully.
Jul 7 06:09:51.743914 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:09:51.745225 systemd-logind[1425]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:09:51.756261 systemd[1]: Started sshd@16-10.0.0.114:22-10.0.0.1:34362.service - OpenSSH per-connection server daemon (10.0.0.1:34362).
Jul 7 06:09:51.757439 systemd-logind[1425]: Removed session 16.
Jul 7 06:09:51.791439 sshd[6161]: Accepted publickey for core from 10.0.0.1 port 34362 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:09:51.792745 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:09:51.796477 systemd-logind[1425]: New session 17 of user core.
Jul 7 06:09:51.805124 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:09:51.934700 sshd[6161]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:51.938320 systemd[1]: sshd@16-10.0.0.114:22-10.0.0.1:34362.service: Deactivated successfully.
Jul 7 06:09:51.940136 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:09:51.940691 systemd-logind[1425]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:09:51.941481 systemd-logind[1425]: Removed session 17.
Jul 7 06:09:55.606628 containerd[1443]: time="2025-07-07T06:09:55.606262801Z" level=info msg="StopPodSandbox for \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\"" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.667 [WARNING][6183] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.667 [INFO][6183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.667 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" iface="eth0" netns="" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.667 [INFO][6183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.667 [INFO][6183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.704 [INFO][6194] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.705 [INFO][6194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.705 [INFO][6194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.715 [WARNING][6194] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.715 [INFO][6194] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.717 [INFO][6194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:55.721038 containerd[1443]: 2025-07-07 06:09:55.719 [INFO][6183] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.721601 containerd[1443]: time="2025-07-07T06:09:55.721067728Z" level=info msg="TearDown network for sandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" successfully" Jul 7 06:09:55.721601 containerd[1443]: time="2025-07-07T06:09:55.721093572Z" level=info msg="StopPodSandbox for \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" returns successfully" Jul 7 06:09:55.721727 containerd[1443]: time="2025-07-07T06:09:55.721668921Z" level=info msg="RemovePodSandbox for \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\"" Jul 7 06:09:55.730676 containerd[1443]: time="2025-07-07T06:09:55.730612726Z" level=info msg="Forcibly stopping sandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\"" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.763 [WARNING][6212] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.763 [INFO][6212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.763 [INFO][6212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" iface="eth0" netns="" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.763 [INFO][6212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.763 [INFO][6212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.793 [INFO][6221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.793 [INFO][6221] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.793 [INFO][6221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.801 [WARNING][6221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.801 [INFO][6221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" HandleID="k8s-pod-network.59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.803 [INFO][6221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:55.806993 containerd[1443]: 2025-07-07 06:09:55.805 [INFO][6212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131" Jul 7 06:09:55.807378 containerd[1443]: time="2025-07-07T06:09:55.807032437Z" level=info msg="TearDown network for sandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" successfully" Jul 7 06:09:55.821075 containerd[1443]: time="2025-07-07T06:09:55.821020454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:55.821178 containerd[1443]: time="2025-07-07T06:09:55.821151430Z" level=info msg="RemovePodSandbox \"59bf294c7b656dc8e65586af04768c0e4986d92d3ffdd66d45294e10755ef131\" returns successfully" Jul 7 06:09:55.821817 containerd[1443]: time="2025-07-07T06:09:55.821778946Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\"" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.856 [WARNING][6239] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.856 [INFO][6239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.856 [INFO][6239] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" iface="eth0" netns="" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.856 [INFO][6239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.856 [INFO][6239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.874 [INFO][6247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.874 [INFO][6247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.874 [INFO][6247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.885 [WARNING][6247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.885 [INFO][6247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.886 [INFO][6247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:55.890624 containerd[1443]: 2025-07-07 06:09:55.888 [INFO][6239] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.890624 containerd[1443]: time="2025-07-07T06:09:55.890587293Z" level=info msg="TearDown network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" successfully" Jul 7 06:09:55.890624 containerd[1443]: time="2025-07-07T06:09:55.890608815Z" level=info msg="StopPodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" returns successfully" Jul 7 06:09:55.892397 containerd[1443]: time="2025-07-07T06:09:55.892206569Z" level=info msg="RemovePodSandbox for \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\"" Jul 7 06:09:55.892397 containerd[1443]: time="2025-07-07T06:09:55.892248814Z" level=info msg="Forcibly stopping sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\"" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.935 [WARNING][6266] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.935 [INFO][6266] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.935 [INFO][6266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" iface="eth0" netns="" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.935 [INFO][6266] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.935 [INFO][6266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.957 [INFO][6277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.959 [INFO][6277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.959 [INFO][6277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.968 [WARNING][6277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.968 [INFO][6277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" HandleID="k8s-pod-network.c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Workload="localhost-k8s-calico--apiserver--5d7db7788--2k4fl-eth0" Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.970 [INFO][6277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:55.974998 containerd[1443]: 2025-07-07 06:09:55.971 [INFO][6266] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a" Jul 7 06:09:55.974998 containerd[1443]: time="2025-07-07T06:09:55.973857474Z" level=info msg="TearDown network for sandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" successfully" Jul 7 06:09:55.976609 containerd[1443]: time="2025-07-07T06:09:55.976566803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:55.976664 containerd[1443]: time="2025-07-07T06:09:55.976640732Z" level=info msg="RemovePodSandbox \"c09ad4e56c522a506797380b6361aeeb5ec4f7c8b49397d6145faf4892a1de4a\" returns successfully" Jul 7 06:09:55.977333 containerd[1443]: time="2025-07-07T06:09:55.977075065Z" level=info msg="StopPodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\"" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.011 [WARNING][6294] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a09f92b-f03b-46c8-9b26-0233f582bf66", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5", Pod:"coredns-668d6bf9bc-m7xfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bd0bda239", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.011 [INFO][6294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.011 [INFO][6294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" iface="eth0" netns="" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.011 [INFO][6294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.011 [INFO][6294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.029 [INFO][6303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.030 [INFO][6303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.030 [INFO][6303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.038 [WARNING][6303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.038 [INFO][6303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.040 [INFO][6303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.043673 containerd[1443]: 2025-07-07 06:09:56.042 [INFO][6294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.044320 containerd[1443]: time="2025-07-07T06:09:56.044199718Z" level=info msg="TearDown network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" successfully" Jul 7 06:09:56.044320 containerd[1443]: time="2025-07-07T06:09:56.044229242Z" level=info msg="StopPodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" returns successfully" Jul 7 06:09:56.044708 containerd[1443]: time="2025-07-07T06:09:56.044682216Z" level=info msg="RemovePodSandbox for \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\"" Jul 7 06:09:56.044766 containerd[1443]: time="2025-07-07T06:09:56.044726181Z" level=info msg="Forcibly stopping sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\"" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.078 [WARNING][6322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6a09f92b-f03b-46c8-9b26-0233f582bf66", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fa77294d4de98937978ce54ff6af61db8e4b4f4331a66263dcc82f7ed97c5b5", Pod:"coredns-668d6bf9bc-m7xfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89bd0bda239", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.078 [INFO][6322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.078 [INFO][6322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" iface="eth0" netns="" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.078 [INFO][6322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.078 [INFO][6322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.098 [INFO][6331] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.098 [INFO][6331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.098 [INFO][6331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.106 [WARNING][6331] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.106 [INFO][6331] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" HandleID="k8s-pod-network.da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Workload="localhost-k8s-coredns--668d6bf9bc--m7xfp-eth0" Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.107 [INFO][6331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.110801 containerd[1443]: 2025-07-07 06:09:56.109 [INFO][6322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97" Jul 7 06:09:56.111230 containerd[1443]: time="2025-07-07T06:09:56.110840446Z" level=info msg="TearDown network for sandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" successfully" Jul 7 06:09:56.113735 containerd[1443]: time="2025-07-07T06:09:56.113691269Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.113796 containerd[1443]: time="2025-07-07T06:09:56.113764437Z" level=info msg="RemovePodSandbox \"da8a5a84cd63137ee8e43e7fcc11bb4c8019278f51df3f93cf65f729851dde97\" returns successfully" Jul 7 06:09:56.114356 containerd[1443]: time="2025-07-07T06:09:56.114331625Z" level=info msg="StopPodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\"" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.146 [WARNING][6349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0", GenerateName:"calico-apiserver-586767dc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586767dc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f", Pod:"calico-apiserver-586767dc6-st5cc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05322329496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.147 [INFO][6349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.147 [INFO][6349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" iface="eth0" netns="" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.147 [INFO][6349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.147 [INFO][6349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.165 [INFO][6358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.165 [INFO][6358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.165 [INFO][6358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.176 [WARNING][6358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.176 [INFO][6358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.178 [INFO][6358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.182035 containerd[1443]: 2025-07-07 06:09:56.180 [INFO][6349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.182035 containerd[1443]: time="2025-07-07T06:09:56.182001837Z" level=info msg="TearDown network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" successfully" Jul 7 06:09:56.182035 containerd[1443]: time="2025-07-07T06:09:56.182036721Z" level=info msg="StopPodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" returns successfully" Jul 7 06:09:56.182711 containerd[1443]: time="2025-07-07T06:09:56.182522100Z" level=info msg="RemovePodSandbox for \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\"" Jul 7 06:09:56.182711 containerd[1443]: time="2025-07-07T06:09:56.182554064Z" level=info msg="Forcibly stopping sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\"" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.214 [WARNING][6376] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0", GenerateName:"calico-apiserver-586767dc6-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b82c11-f0c8-4cab-bff0-a1f67bee9ab4", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"586767dc6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79649b17cb9db823c3ae29c3c9f1cfdbe6675a58e90730191082d0f61d08b10f", Pod:"calico-apiserver-586767dc6-st5cc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05322329496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.215 [INFO][6376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.215 [INFO][6376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" iface="eth0" netns="" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.215 [INFO][6376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.215 [INFO][6376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.234 [INFO][6385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.234 [INFO][6385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.234 [INFO][6385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.242 [WARNING][6385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.242 [INFO][6385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" HandleID="k8s-pod-network.0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Workload="localhost-k8s-calico--apiserver--586767dc6--st5cc-eth0" Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.243 [INFO][6385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.247059 containerd[1443]: 2025-07-07 06:09:56.245 [INFO][6376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a" Jul 7 06:09:56.247474 containerd[1443]: time="2025-07-07T06:09:56.247092259Z" level=info msg="TearDown network for sandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" successfully" Jul 7 06:09:56.249980 containerd[1443]: time="2025-07-07T06:09:56.249937961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.250042 containerd[1443]: time="2025-07-07T06:09:56.250014490Z" level=info msg="RemovePodSandbox \"0c3a632ef28fe5d5e0f393965d5e8b8721f57a9d63a9032db8bf41bb6450c01a\" returns successfully" Jul 7 06:09:56.250750 containerd[1443]: time="2025-07-07T06:09:56.250461584Z" level=info msg="StopPodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\"" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.281 [WARNING][6402] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--txlwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5705c009-0d57-436d-b155-b8ac4388465f", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908", Pod:"csi-node-driver-txlwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali342ddfb41ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.281 [INFO][6402] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.281 [INFO][6402] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" iface="eth0" netns="" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.281 [INFO][6402] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.281 [INFO][6402] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.299 [INFO][6410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.299 [INFO][6410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.299 [INFO][6410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.307 [WARNING][6410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.307 [INFO][6410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.308 [INFO][6410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.312257 containerd[1443]: 2025-07-07 06:09:56.310 [INFO][6402] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.312695 containerd[1443]: time="2025-07-07T06:09:56.312293974Z" level=info msg="TearDown network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" successfully" Jul 7 06:09:56.312695 containerd[1443]: time="2025-07-07T06:09:56.312318377Z" level=info msg="StopPodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" returns successfully" Jul 7 06:09:56.312801 containerd[1443]: time="2025-07-07T06:09:56.312738387Z" level=info msg="RemovePodSandbox for \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\"" Jul 7 06:09:56.312801 containerd[1443]: time="2025-07-07T06:09:56.312784193Z" level=info msg="Forcibly stopping sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\"" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.344 [WARNING][6427] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--txlwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5705c009-0d57-436d-b155-b8ac4388465f", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"338c3a0b35ce1f6160acb162d80c7ed860ddf6f14a2faa22652f75e3f14a1908", Pod:"csi-node-driver-txlwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali342ddfb41ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.344 [INFO][6427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.344 [INFO][6427] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" iface="eth0" netns="" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.344 [INFO][6427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.344 [INFO][6427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.361 [INFO][6436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.362 [INFO][6436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.362 [INFO][6436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.372 [WARNING][6436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.372 [INFO][6436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" HandleID="k8s-pod-network.781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Workload="localhost-k8s-csi--node--driver--txlwn-eth0" Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.374 [INFO][6436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.378101 containerd[1443]: 2025-07-07 06:09:56.376 [INFO][6427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f" Jul 7 06:09:56.378544 containerd[1443]: time="2025-07-07T06:09:56.378129885Z" level=info msg="TearDown network for sandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" successfully" Jul 7 06:09:56.386722 containerd[1443]: time="2025-07-07T06:09:56.386677872Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.386825 containerd[1443]: time="2025-07-07T06:09:56.386775444Z" level=info msg="RemovePodSandbox \"781bc85e86afd7f80bc1d7e2c9eca9581fd1c2abecb9d223ba984b61ed7e8a4f\" returns successfully" Jul 7 06:09:56.387280 containerd[1443]: time="2025-07-07T06:09:56.387246781Z" level=info msg="StopPodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\"" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.418 [WARNING][6454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0", GenerateName:"calico-kube-controllers-7fbfd84b85-", Namespace:"calico-system", SelfLink:"", UID:"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fbfd84b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc", Pod:"calico-kube-controllers-7fbfd84b85-7qqnq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali800d6a288b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.418 [INFO][6454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.418 [INFO][6454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" iface="eth0" netns="" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.418 [INFO][6454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.418 [INFO][6454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.436 [INFO][6463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.436 [INFO][6463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.436 [INFO][6463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.445 [WARNING][6463] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.445 [INFO][6463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.446 [INFO][6463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.450051 containerd[1443]: 2025-07-07 06:09:56.448 [INFO][6454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.450051 containerd[1443]: time="2025-07-07T06:09:56.450029125Z" level=info msg="TearDown network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" successfully" Jul 7 06:09:56.450491 containerd[1443]: time="2025-07-07T06:09:56.450064049Z" level=info msg="StopPodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" returns successfully" Jul 7 06:09:56.450777 containerd[1443]: time="2025-07-07T06:09:56.450729929Z" level=info msg="RemovePodSandbox for \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\"" Jul 7 06:09:56.450777 containerd[1443]: time="2025-07-07T06:09:56.450771094Z" level=info msg="Forcibly stopping sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\"" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.488 [WARNING][6480] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0", GenerateName:"calico-kube-controllers-7fbfd84b85-", Namespace:"calico-system", SelfLink:"", UID:"058206b3-65d3-47c5-ac92-f4a3b7ef1d3d", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fbfd84b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"107574b8741ff8591c2cf2d0917064b9f3278cd8a11b89029909e8ce3a006ecc", Pod:"calico-kube-controllers-7fbfd84b85-7qqnq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali800d6a288b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.488 [INFO][6480] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.488 [INFO][6480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" iface="eth0" netns="" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.488 [INFO][6480] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.488 [INFO][6480] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.507 [INFO][6489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.507 [INFO][6489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.507 [INFO][6489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.515 [WARNING][6489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.516 [INFO][6489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" HandleID="k8s-pod-network.c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Workload="localhost-k8s-calico--kube--controllers--7fbfd84b85--7qqnq-eth0" Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.517 [INFO][6489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.521143 containerd[1443]: 2025-07-07 06:09:56.519 [INFO][6480] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35" Jul 7 06:09:56.521545 containerd[1443]: time="2025-07-07T06:09:56.521182235Z" level=info msg="TearDown network for sandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" successfully" Jul 7 06:09:56.524183 containerd[1443]: time="2025-07-07T06:09:56.524144431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.524249 containerd[1443]: time="2025-07-07T06:09:56.524209439Z" level=info msg="RemovePodSandbox \"c3fd5594a0d50ee0c0d7715760f42e1ca9d1f209e1cfb280eb7778ee7c1ebc35\" returns successfully" Jul 7 06:09:56.524711 containerd[1443]: time="2025-07-07T06:09:56.524685856Z" level=info msg="StopPodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\"" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.557 [WARNING][6507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b552d16-8bbf-4c9c-b453-0c942c087079", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4", Pod:"calico-apiserver-5d7db7788-k7kxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f2fb3a54e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.557 [INFO][6507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.557 [INFO][6507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" iface="eth0" netns="" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.557 [INFO][6507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.557 [INFO][6507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.576 [INFO][6515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.576 [INFO][6515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.576 [INFO][6515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.584 [WARNING][6515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.584 [INFO][6515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.585 [INFO][6515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.589789 containerd[1443]: 2025-07-07 06:09:56.587 [INFO][6507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.590214 containerd[1443]: time="2025-07-07T06:09:56.589836125Z" level=info msg="TearDown network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" successfully" Jul 7 06:09:56.590214 containerd[1443]: time="2025-07-07T06:09:56.589863808Z" level=info msg="StopPodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" returns successfully" Jul 7 06:09:56.590642 containerd[1443]: time="2025-07-07T06:09:56.590617579Z" level=info msg="RemovePodSandbox for \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\"" Jul 7 06:09:56.590693 containerd[1443]: time="2025-07-07T06:09:56.590652503Z" level=info msg="Forcibly stopping sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\"" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.626 [WARNING][6532] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0", GenerateName:"calico-apiserver-5d7db7788-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b552d16-8bbf-4c9c-b453-0c942c087079", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7db7788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e79b6a98f0d7872cfb29cf7486c28f309ccbdd8c8127548beace891b05fdfdf4", Pod:"calico-apiserver-5d7db7788-k7kxf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19f2fb3a54e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.627 [INFO][6532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.627 [INFO][6532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" iface="eth0" netns="" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.627 [INFO][6532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.627 [INFO][6532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.644 [INFO][6541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.644 [INFO][6541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.644 [INFO][6541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.652 [WARNING][6541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.652 [INFO][6541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" HandleID="k8s-pod-network.3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Workload="localhost-k8s-calico--apiserver--5d7db7788--k7kxf-eth0" Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.654 [INFO][6541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.657457 containerd[1443]: 2025-07-07 06:09:56.655 [INFO][6532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529" Jul 7 06:09:56.658167 containerd[1443]: time="2025-07-07T06:09:56.657492495Z" level=info msg="TearDown network for sandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" successfully" Jul 7 06:09:56.660292 containerd[1443]: time="2025-07-07T06:09:56.660259628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.660351 containerd[1443]: time="2025-07-07T06:09:56.660329396Z" level=info msg="RemovePodSandbox \"3f031f85d7526fd768e1a9362b5409a500f2041f7b951b19066775d72818b529\" returns successfully" Jul 7 06:09:56.660863 containerd[1443]: time="2025-07-07T06:09:56.660839857Z" level=info msg="StopPodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\"" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.696 [WARNING][6559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0f7848e9-158e-4510-8474-f086afb371a7", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc", Pod:"goldmane-768f4c5c69-lmf8d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba6f51d8f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.697 [INFO][6559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.697 [INFO][6559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" iface="eth0" netns="" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.697 [INFO][6559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.697 [INFO][6559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.716 [INFO][6568] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.716 [INFO][6568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.716 [INFO][6568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.725 [WARNING][6568] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.725 [INFO][6568] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.726 [INFO][6568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.730559 containerd[1443]: 2025-07-07 06:09:56.728 [INFO][6559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.730559 containerd[1443]: time="2025-07-07T06:09:56.730541353Z" level=info msg="TearDown network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" successfully" Jul 7 06:09:56.731122 containerd[1443]: time="2025-07-07T06:09:56.730573957Z" level=info msg="StopPodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" returns successfully" Jul 7 06:09:56.731122 containerd[1443]: time="2025-07-07T06:09:56.731008169Z" level=info msg="RemovePodSandbox for \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\"" Jul 7 06:09:56.731122 containerd[1443]: time="2025-07-07T06:09:56.731037053Z" level=info msg="Forcibly stopping sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\"" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.763 [WARNING][6586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"0f7848e9-158e-4510-8474-f086afb371a7", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b954b61f6843cef2bf1de33289df4254535965e4a136b7bdc3520d0184faecc", Pod:"goldmane-768f4c5c69-lmf8d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliba6f51d8f7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.763 [INFO][6586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.763 [INFO][6586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" iface="eth0" netns="" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.763 [INFO][6586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.763 [INFO][6586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.782 [INFO][6595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.782 [INFO][6595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.782 [INFO][6595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.791 [WARNING][6595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.791 [INFO][6595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" HandleID="k8s-pod-network.a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Workload="localhost-k8s-goldmane--768f4c5c69--lmf8d-eth0" Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.792 [INFO][6595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.795604 containerd[1443]: 2025-07-07 06:09:56.794 [INFO][6586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba" Jul 7 06:09:56.796072 containerd[1443]: time="2025-07-07T06:09:56.795628895Z" level=info msg="TearDown network for sandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" successfully" Jul 7 06:09:56.798400 containerd[1443]: time="2025-07-07T06:09:56.798360863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.798458 containerd[1443]: time="2025-07-07T06:09:56.798422070Z" level=info msg="RemovePodSandbox \"a9eb7a0506a817031500ceb14aaf88ca385cb1935d0469fa08ebf373707ad7ba\" returns successfully" Jul 7 06:09:56.799012 containerd[1443]: time="2025-07-07T06:09:56.798989378Z" level=info msg="StopPodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\"" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.833 [WARNING][6613] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"acb26600-e422-4fa9-86c9-1e99272ac907", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1", Pod:"coredns-668d6bf9bc-wtdw2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb351f2839a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.833 [INFO][6613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.833 [INFO][6613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" iface="eth0" netns="" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.833 [INFO][6613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.833 [INFO][6613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.851 [INFO][6622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.851 [INFO][6622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.852 [INFO][6622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.860 [WARNING][6622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.860 [INFO][6622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.861 [INFO][6622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.865101 containerd[1443]: 2025-07-07 06:09:56.863 [INFO][6613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.865569 containerd[1443]: time="2025-07-07T06:09:56.865132807Z" level=info msg="TearDown network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" successfully" Jul 7 06:09:56.865569 containerd[1443]: time="2025-07-07T06:09:56.865157930Z" level=info msg="StopPodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" returns successfully" Jul 7 06:09:56.865649 containerd[1443]: time="2025-07-07T06:09:56.865598543Z" level=info msg="RemovePodSandbox for \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\"" Jul 7 06:09:56.865649 containerd[1443]: time="2025-07-07T06:09:56.865626066Z" level=info msg="Forcibly stopping sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\"" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.897 [WARNING][6640] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"acb26600-e422-4fa9-86c9-1e99272ac907", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 9, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4139863f28ec524ddc8f268f2ad68bbdaa38d96ef3bfa144f85694445f3178a1", Pod:"coredns-668d6bf9bc-wtdw2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidb351f2839a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.897 [INFO][6640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.897 [INFO][6640] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" iface="eth0" netns="" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.897 [INFO][6640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.897 [INFO][6640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.915 [INFO][6649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.915 [INFO][6649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.915 [INFO][6649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.923 [WARNING][6649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.923 [INFO][6649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" HandleID="k8s-pod-network.0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Workload="localhost-k8s-coredns--668d6bf9bc--wtdw2-eth0" Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.928 [INFO][6649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:56.934491 containerd[1443]: 2025-07-07 06:09:56.931 [INFO][6640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1" Jul 7 06:09:56.935077 containerd[1443]: time="2025-07-07T06:09:56.934520345Z" level=info msg="TearDown network for sandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" successfully" Jul 7 06:09:56.938309 containerd[1443]: time="2025-07-07T06:09:56.938229550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:09:56.938309 containerd[1443]: time="2025-07-07T06:09:56.938297639Z" level=info msg="RemovePodSandbox \"0fd8f490979a5a83cd415e328c4f6466168eb8252dc438f017c40e7d629bf3a1\" returns successfully" Jul 7 06:09:56.938783 containerd[1443]: time="2025-07-07T06:09:56.938747293Z" level=info msg="StopPodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\"" Jul 7 06:09:56.946895 systemd[1]: Started sshd@17-10.0.0.114:22-10.0.0.1:36126.service - OpenSSH per-connection server daemon (10.0.0.1:36126). Jul 7 06:09:56.994394 sshd[6674]: Accepted publickey for core from 10.0.0.1 port 36126 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:09:56.997497 sshd[6674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:09:57.002311 systemd-logind[1425]: New session 18 of user core. Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.972 [WARNING][6669] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" WorkloadEndpoint="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.972 [INFO][6669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.972 [INFO][6669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" iface="eth0" netns="" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.972 [INFO][6669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.972 [INFO][6669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.993 [INFO][6680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.993 [INFO][6680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:56.993 [INFO][6680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:57.003 [WARNING][6680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:57.003 [INFO][6680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:57.005 [INFO][6680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:57.009059 containerd[1443]: 2025-07-07 06:09:57.007 [INFO][6669] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.009701 containerd[1443]: time="2025-07-07T06:09:57.009067814Z" level=info msg="TearDown network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" successfully" Jul 7 06:09:57.009701 containerd[1443]: time="2025-07-07T06:09:57.009091857Z" level=info msg="StopPodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" returns successfully" Jul 7 06:09:57.009701 containerd[1443]: time="2025-07-07T06:09:57.009537670Z" level=info msg="RemovePodSandbox for \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\"" Jul 7 06:09:57.009701 containerd[1443]: time="2025-07-07T06:09:57.009571794Z" level=info msg="Forcibly stopping sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\"" Jul 7 06:09:57.009140 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.047 [WARNING][6699] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" WorkloadEndpoint="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.047 [INFO][6699] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.047 [INFO][6699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" iface="eth0" netns="" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.047 [INFO][6699] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.047 [INFO][6699] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.070 [INFO][6708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.070 [INFO][6708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.070 [INFO][6708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.079 [WARNING][6708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.079 [INFO][6708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" HandleID="k8s-pod-network.26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Workload="localhost-k8s-whisker--54c9c5d6b7--8slwf-eth0" Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.080 [INFO][6708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:09:57.084240 containerd[1443]: 2025-07-07 06:09:57.082 [INFO][6699] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f" Jul 7 06:09:57.084240 containerd[1443]: time="2025-07-07T06:09:57.084182880Z" level=info msg="TearDown network for sandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" successfully" Jul 7 06:09:57.086994 containerd[1443]: time="2025-07-07T06:09:57.086946729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 06:09:57.087043 containerd[1443]: time="2025-07-07T06:09:57.087022778Z" level=info msg="RemovePodSandbox \"26838dc1c59bdc1a4af99ccbe2ba5816b491954951e6ed79525a30d56fa7307f\" returns successfully"
Jul 7 06:09:57.214522 sshd[6674]: pam_unix(sshd:session): session closed for user core
Jul 7 06:09:57.218098 systemd-logind[1425]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:09:57.218394 systemd[1]: sshd@17-10.0.0.114:22-10.0.0.1:36126.service: Deactivated successfully.
Jul 7 06:09:57.220652 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:09:57.221394 systemd-logind[1425]: Removed session 18.
Jul 7 06:09:57.869041 systemd[1]: run-containerd-runc-k8s.io-f6305fe0dbf7d5fa89ff6fb7053f9a6426cd269c98eb61591d4191004a98cb32-runc.8NPXLw.mount: Deactivated successfully.
Jul 7 06:10:02.232070 systemd[1]: Started sshd@18-10.0.0.114:22-10.0.0.1:36142.service - OpenSSH per-connection server daemon (10.0.0.1:36142).
Jul 7 06:10:02.273841 sshd[6750]: Accepted publickey for core from 10.0.0.1 port 36142 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:10:02.275362 sshd[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:10:02.280044 systemd-logind[1425]: New session 19 of user core.
Jul 7 06:10:02.296181 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:10:02.451866 sshd[6750]: pam_unix(sshd:session): session closed for user core
Jul 7 06:10:02.455522 systemd[1]: sshd@18-10.0.0.114:22-10.0.0.1:36142.service: Deactivated successfully.
Jul 7 06:10:02.457535 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:10:02.458248 systemd-logind[1425]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:10:02.459473 systemd-logind[1425]: Removed session 19.
Jul 7 06:10:07.467092 systemd[1]: Started sshd@19-10.0.0.114:22-10.0.0.1:43970.service - OpenSSH per-connection server daemon (10.0.0.1:43970).
Jul 7 06:10:07.507531 sshd[6766]: Accepted publickey for core from 10.0.0.1 port 43970 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:10:07.508856 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:10:07.514651 systemd-logind[1425]: New session 20 of user core.
Jul 7 06:10:07.521196 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:10:07.688548 sshd[6766]: pam_unix(sshd:session): session closed for user core
Jul 7 06:10:07.692284 systemd[1]: sshd@19-10.0.0.114:22-10.0.0.1:43970.service: Deactivated successfully.
Jul 7 06:10:07.694597 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:10:07.695527 systemd-logind[1425]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:10:07.696646 systemd-logind[1425]: Removed session 20.