Jul 11 00:25:23.899530 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 11 00:25:23.899550 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025 Jul 11 00:25:23.899560 kernel: KASLR enabled Jul 11 00:25:23.899566 kernel: efi: EFI v2.7 by EDK II Jul 11 00:25:23.899571 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 11 00:25:23.899577 kernel: random: crng init done Jul 11 00:25:23.899584 kernel: ACPI: Early table checksum verification disabled Jul 11 00:25:23.899589 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 11 00:25:23.899596 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 11 00:25:23.899603 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899609 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899615 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899621 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899627 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899634 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899641 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899648 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899654 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:25:23.899660 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 11 00:25:23.899666 kernel: NUMA: Failed to initialise from firmware Jul 11 00:25:23.899672 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:25:23.899679 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jul 11 00:25:23.899685 kernel: Zone ranges: Jul 11 00:25:23.899691 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:25:23.899697 kernel: DMA32 empty Jul 11 00:25:23.899704 kernel: Normal empty Jul 11 00:25:23.899710 kernel: Movable zone start for each node Jul 11 00:25:23.899717 kernel: Early memory node ranges Jul 11 00:25:23.899723 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 11 00:25:23.899729 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 11 00:25:23.899736 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 11 00:25:23.899742 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 11 00:25:23.899748 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 11 00:25:23.899754 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 11 00:25:23.899760 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 11 00:25:23.899766 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:25:23.899772 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 11 00:25:23.899780 kernel: psci: probing for conduit method from ACPI. Jul 11 00:25:23.899786 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 11 00:25:23.899792 kernel: psci: Using standard PSCI v0.2 function IDs Jul 11 00:25:23.899801 kernel: psci: Trusted OS migration not required Jul 11 00:25:23.899807 kernel: psci: SMC Calling Convention v1.1 Jul 11 00:25:23.899814 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 11 00:25:23.899822 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 11 00:25:23.899829 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 11 00:25:23.899836 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 11 00:25:23.899843 kernel: Detected PIPT I-cache on CPU0 Jul 11 00:25:23.899849 kernel: CPU features: detected: GIC system register CPU interface Jul 11 00:25:23.899856 kernel: CPU features: detected: Hardware dirty bit management Jul 11 00:25:23.899862 kernel: CPU features: detected: Spectre-v4 Jul 11 00:25:23.899869 kernel: CPU features: detected: Spectre-BHB Jul 11 00:25:23.899876 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 11 00:25:23.899882 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 11 00:25:23.899890 kernel: CPU features: detected: ARM erratum 1418040 Jul 11 00:25:23.899897 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 11 00:25:23.899903 kernel: alternatives: applying boot alternatives Jul 11 00:25:23.899911 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990 Jul 11 00:25:23.899918 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:25:23.899924 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 00:25:23.899931 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:25:23.899938 kernel: Fallback order for Node 0: 0 Jul 11 00:25:23.899944 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 11 00:25:23.899951 kernel: Policy zone: DMA Jul 11 00:25:23.899957 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:25:23.899965 kernel: software IO TLB: area num 4. Jul 11 00:25:23.899972 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 11 00:25:23.899979 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved) Jul 11 00:25:23.899986 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 00:25:23.899992 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:25:23.899999 kernel: rcu: RCU event tracing is enabled. Jul 11 00:25:23.900006 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 00:25:23.900013 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:25:23.900019 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:25:23.900026 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 11 00:25:23.900033 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 00:25:23.900039 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 11 00:25:23.900047 kernel: GICv3: 256 SPIs implemented Jul 11 00:25:23.900054 kernel: GICv3: 0 Extended SPIs implemented Jul 11 00:25:23.900060 kernel: Root IRQ handler: gic_handle_irq Jul 11 00:25:23.900067 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 11 00:25:23.900073 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 11 00:25:23.900080 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 11 00:25:23.900087 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 11 00:25:23.900094 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 11 00:25:23.900100 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 11 00:25:23.900107 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 11 00:25:23.900114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 11 00:25:23.900122 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:25:23.900128 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 11 00:25:23.900135 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 11 00:25:23.900142 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 11 00:25:23.900149 kernel: arm-pv: using stolen time PV Jul 11 00:25:23.900156 kernel: Console: colour dummy device 80x25 Jul 11 00:25:23.900163 kernel: ACPI: Core revision 20230628 Jul 11 00:25:23.900170 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 11 00:25:23.900177 kernel: pid_max: default: 32768 minimum: 301 Jul 11 00:25:23.900183 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 11 00:25:23.900203 kernel: landlock: Up and running. Jul 11 00:25:23.900210 kernel: SELinux: Initializing. Jul 11 00:25:23.900216 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:25:23.900223 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:25:23.900230 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:25:23.900237 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 11 00:25:23.900244 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:25:23.900251 kernel: rcu: Max phase no-delay instances is 400. Jul 11 00:25:23.900258 kernel: Platform MSI: ITS@0x8080000 domain created Jul 11 00:25:23.900266 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 11 00:25:23.900272 kernel: Remapping and enabling EFI services. Jul 11 00:25:23.900279 kernel: smp: Bringing up secondary CPUs ... 
Jul 11 00:25:23.900286 kernel: Detected PIPT I-cache on CPU1 Jul 11 00:25:23.900293 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 11 00:25:23.900300 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 11 00:25:23.900307 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:25:23.900321 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 11 00:25:23.900328 kernel: Detected PIPT I-cache on CPU2 Jul 11 00:25:23.900335 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 11 00:25:23.900344 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 11 00:25:23.900351 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:25:23.900362 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 11 00:25:23.900370 kernel: Detected PIPT I-cache on CPU3 Jul 11 00:25:23.900377 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 11 00:25:23.900384 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 11 00:25:23.900392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:25:23.900398 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 11 00:25:23.900406 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 00:25:23.900414 kernel: SMP: Total of 4 processors activated. Jul 11 00:25:23.900421 kernel: CPU features: detected: 32-bit EL0 Support Jul 11 00:25:23.900428 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 11 00:25:23.900436 kernel: CPU features: detected: Common not Private translations Jul 11 00:25:23.900443 kernel: CPU features: detected: CRC32 instructions Jul 11 00:25:23.900450 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 11 00:25:23.900457 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 11 00:25:23.900464 kernel: CPU features: detected: LSE atomic instructions Jul 11 00:25:23.900473 kernel: CPU features: detected: Privileged Access Never Jul 11 00:25:23.900480 kernel: CPU features: detected: RAS Extension Support Jul 11 00:25:23.900487 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 11 00:25:23.900494 kernel: CPU: All CPU(s) started at EL1 Jul 11 00:25:23.900501 kernel: alternatives: applying system-wide alternatives Jul 11 00:25:23.900508 kernel: devtmpfs: initialized Jul 11 00:25:23.900516 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:25:23.900523 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 00:25:23.900530 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:25:23.900538 kernel: SMBIOS 3.0.0 present. 
Jul 11 00:25:23.900546 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 11 00:25:23.900553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:25:23.900560 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 11 00:25:23.900567 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 11 00:25:23.900574 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 11 00:25:23.900582 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:25:23.900589 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jul 11 00:25:23.900596 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:25:23.900604 kernel: cpuidle: using governor menu Jul 11 00:25:23.900612 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 11 00:25:23.900619 kernel: ASID allocator initialised with 32768 entries Jul 11 00:25:23.900626 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:25:23.900633 kernel: Serial: AMBA PL011 UART driver Jul 11 00:25:23.900640 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 11 00:25:23.900648 kernel: Modules: 0 pages in range for non-PLT usage Jul 11 00:25:23.900655 kernel: Modules: 509008 pages in range for PLT usage Jul 11 00:25:23.900662 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:25:23.900670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 11 00:25:23.900677 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 11 00:25:23.900685 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 11 00:25:23.900692 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:25:23.900699 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 11 00:25:23.900706 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 11 00:25:23.900713 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 11 00:25:23.900720 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:25:23.900727 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:25:23.900735 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:25:23.900742 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:25:23.900749 kernel: ACPI: Interpreter enabled Jul 11 00:25:23.900757 kernel: ACPI: Using GIC for interrupt routing Jul 11 00:25:23.900764 kernel: ACPI: MCFG table detected, 1 entries Jul 11 00:25:23.900771 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 11 00:25:23.900778 kernel: printk: console [ttyAMA0] enabled Jul 11 00:25:23.900785 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 00:25:23.900912 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:25:23.900986 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 11 00:25:23.901049 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 11 00:25:23.901111 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 11 00:25:23.901173 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 11 00:25:23.901182 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 11 00:25:23.901212 kernel: PCI host bridge to bus 0000:00 Jul 11 
00:25:23.901287 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 11 00:25:23.901362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 11 00:25:23.901420 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 11 00:25:23.901477 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 00:25:23.901561 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 11 00:25:23.901635 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 11 00:25:23.901700 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 11 00:25:23.901767 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 11 00:25:23.901830 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 11 00:25:23.901893 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 11 00:25:23.901957 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 11 00:25:23.902020 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 11 00:25:23.902078 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 11 00:25:23.902133 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 11 00:25:23.902204 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 11 00:25:23.902215 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 11 00:25:23.902222 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 11 00:25:23.902229 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 11 00:25:23.902236 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 11 00:25:23.902243 kernel: iommu: Default domain type: Translated Jul 11 00:25:23.902250 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 11 00:25:23.902258 kernel: efivars: Registered efivars operations Jul 11 00:25:23.902265 kernel: vgaarb: loaded Jul 11 00:25:23.902274 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 11 00:25:23.902281 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:25:23.902289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:25:23.902296 kernel: pnp: PnP ACPI init Jul 11 00:25:23.902375 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 11 00:25:23.902386 kernel: pnp: PnP ACPI: found 1 devices Jul 11 00:25:23.902393 kernel: NET: Registered PF_INET protocol family Jul 11 00:25:23.902401 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 00:25:23.902410 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 00:25:23.902418 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:25:23.902425 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:25:23.902432 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 11 00:25:23.902440 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 00:25:23.902447 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:25:23.902454 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:25:23.902461 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:25:23.902469 kernel: PCI: CLS 0 bytes, default 64 Jul 11 00:25:23.902477 kernel: kvm [1]: HYP mode not available Jul 11 00:25:23.902484 kernel: Initialise 
system trusted keyrings Jul 11 00:25:23.902491 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 00:25:23.902498 kernel: Key type asymmetric registered Jul 11 00:25:23.902506 kernel: Asymmetric key parser 'x509' registered Jul 11 00:25:23.902513 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 11 00:25:23.902520 kernel: io scheduler mq-deadline registered Jul 11 00:25:23.902527 kernel: io scheduler kyber registered Jul 11 00:25:23.902534 kernel: io scheduler bfq registered Jul 11 00:25:23.902543 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 11 00:25:23.902550 kernel: ACPI: button: Power Button [PWRB] Jul 11 00:25:23.902557 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 11 00:25:23.902623 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 11 00:25:23.902633 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:25:23.902641 kernel: thunder_xcv, ver 1.0 Jul 11 00:25:23.902648 kernel: thunder_bgx, ver 1.0 Jul 11 00:25:23.902655 kernel: nicpf, ver 1.0 Jul 11 00:25:23.902662 kernel: nicvf, ver 1.0 Jul 11 00:25:23.902736 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 11 00:25:23.902797 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:25:23 UTC (1752193523) Jul 11 00:25:23.902806 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 11 00:25:23.902814 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 11 00:25:23.902821 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 11 00:25:23.902828 kernel: watchdog: Hard watchdog permanently disabled Jul 11 00:25:23.902835 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:25:23.902843 kernel: Segment Routing with IPv6 Jul 11 00:25:23.902852 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:25:23.902859 kernel: NET: Registered PF_PACKET protocol family Jul 11 00:25:23.902866 kernel: Key type dns_resolver registered Jul 11 00:25:23.902873 kernel: registered taskstats version 1 Jul 11 00:25:23.902880 kernel: Loading compiled-in X.509 certificates Jul 11 00:25:23.902887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb' Jul 11 00:25:23.902894 kernel: Key type .fscrypt registered Jul 11 00:25:23.902901 kernel: Key type fscrypt-provisioning registered Jul 11 00:25:23.902909 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 11 00:25:23.902917 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:25:23.902924 kernel: ima: No architecture policies found Jul 11 00:25:23.902932 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 11 00:25:23.902939 kernel: clk: Disabling unused clocks Jul 11 00:25:23.902946 kernel: Freeing unused kernel memory: 39424K Jul 11 00:25:23.902953 kernel: Run /init as init process Jul 11 00:25:23.902960 kernel: with arguments: Jul 11 00:25:23.902967 kernel: /init Jul 11 00:25:23.902974 kernel: with environment: Jul 11 00:25:23.902982 kernel: HOME=/ Jul 11 00:25:23.902989 kernel: TERM=linux Jul 11 00:25:23.902996 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:25:23.903005 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:25:23.903014 systemd[1]: Detected virtualization kvm. Jul 11 00:25:23.903022 systemd[1]: Detected architecture arm64. Jul 11 00:25:23.903029 systemd[1]: Running in initrd. Jul 11 00:25:23.903038 systemd[1]: No hostname configured, using default hostname. Jul 11 00:25:23.903045 systemd[1]: Hostname set to . Jul 11 00:25:23.903053 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:25:23.903061 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:25:23.903068 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:25:23.903076 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:25:23.903084 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 11 00:25:23.903092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:25:23.903101 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 11 00:25:23.903109 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 11 00:25:23.903118 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 11 00:25:23.903126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 11 00:25:23.903134 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:25:23.903142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:25:23.903149 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:25:23.903158 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:25:23.903166 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:25:23.903174 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:25:23.903181 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:25:23.903197 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:25:23.903205 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 00:25:23.903213 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 11 00:25:23.903221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 11 00:25:23.903229 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:25:23.903238 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:25:23.903246 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:25:23.903253 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 11 00:25:23.903261 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:25:23.903269 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 11 00:25:23.903277 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:25:23.903284 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:25:23.903292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:25:23.903301 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:23.903309 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 11 00:25:23.903323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:25:23.903331 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:25:23.903339 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:25:23.903367 systemd-journald[238]: Collecting audit messages is disabled. Jul 11 00:25:23.903386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:23.903394 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:25:23.903402 systemd-journald[238]: Journal started Jul 11 00:25:23.903422 systemd-journald[238]: Runtime Journal (/run/log/journal/157f5bffe28e4b5c8798b626e099994f) is 5.9M, max 47.3M, 41.4M free. Jul 11 00:25:23.894870 systemd-modules-load[239]: Inserted module 'overlay' Jul 11 00:25:23.907213 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:25:23.907239 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:25:23.910125 systemd-modules-load[239]: Inserted module 'br_netfilter' Jul 11 00:25:23.910983 kernel: Bridge firewalling registered Jul 11 00:25:23.910871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:25:23.919353 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:25:23.920994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:25:23.922988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:25:23.925142 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:25:23.933473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:25:23.934618 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:25:23.937977 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:25:23.940481 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:23.947360 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 11 00:25:23.949949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 11 00:25:23.957344 dracut-cmdline[274]: dracut-dracut-053 Jul 11 00:25:23.959836 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990 Jul 11 00:25:23.977936 systemd-resolved[278]: Positive Trust Anchors: Jul 11 00:25:23.977954 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:25:23.977986 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:25:23.982905 systemd-resolved[278]: Defaulting to hostname 'linux'. Jul 11 00:25:23.983967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:25:23.988349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:25:24.027220 kernel: SCSI subsystem initialized Jul 11 00:25:24.033216 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:25:24.039237 kernel: iscsi: registered transport (tcp) Jul 11 00:25:24.052202 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:25:24.052224 kernel: QLogic iSCSI HBA Driver Jul 11 00:25:24.093780 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 11 00:25:24.103391 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 11 00:25:24.121768 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:25:24.121810 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:25:24.123358 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 11 00:25:24.172224 kernel: raid6: neonx8 gen() 15792 MB/s Jul 11 00:25:24.189224 kernel: raid6: neonx4 gen() 15673 MB/s Jul 11 00:25:24.206222 kernel: raid6: neonx2 gen() 13265 MB/s Jul 11 00:25:24.223219 kernel: raid6: neonx1 gen() 10453 MB/s Jul 11 00:25:24.240220 kernel: raid6: int64x8 gen() 6947 MB/s Jul 11 00:25:24.257217 kernel: raid6: int64x4 gen() 7340 MB/s Jul 11 00:25:24.274216 kernel: raid6: int64x2 gen() 6125 MB/s Jul 11 00:25:24.291291 kernel: raid6: int64x1 gen() 5052 MB/s Jul 11 00:25:24.291327 kernel: raid6: using algorithm neonx8 gen() 15792 MB/s Jul 11 00:25:24.309268 kernel: raid6: .... xor() 11923 MB/s, rmw enabled Jul 11 00:25:24.309281 kernel: raid6: using neon recovery algorithm Jul 11 00:25:24.314694 kernel: xor: measuring software checksum speed Jul 11 00:25:24.314707 kernel: 8regs : 19783 MB/sec Jul 11 00:25:24.315368 kernel: 32regs : 19660 MB/sec Jul 11 00:25:24.316605 kernel: arm64_neon : 27052 MB/sec Jul 11 00:25:24.316628 kernel: xor: using function: arm64_neon (27052 MB/sec) Jul 11 00:25:24.369215 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 11 00:25:24.380374 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
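
The dracut-cmdline hook above echoes the full kernel command line, including the dm-verity parameters that pin the read-only /usr partition (verity.usr=PARTUUID=..., verity.usrhash=...). Purely as an illustration (not part of the boot output), a command line of this shape can be split into key/value pairs as sketched below; bare tokens become flags, and a real parser would also have to handle quoted values:

    # Illustrative sketch only: split a kernel command line like the one logged
    # above into a dict. Values containing '=' (such as verity.usr=PARTUUID=...)
    # keep everything after the first '='; quoted values are not handled here.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True
        return params

    args = parse_cmdline(
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
        "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
        "flatcar.first_boot=detected acpi=force "
        "verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990"
    )
    print(args["root"], args["mount.usr"], args["verity.usrhash"][:16])
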
Jul 11 00:25:24.389350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:25:24.400777 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jul 11 00:25:24.403861 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:25:24.410334 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 11 00:25:24.421498 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Jul 11 00:25:24.446622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:25:24.462383 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:25:24.499822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:25:24.509388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 11 00:25:24.519414 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 11 00:25:24.521545 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:25:24.523326 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:25:24.526629 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:25:24.537458 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 11 00:25:24.546912 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 11 00:25:24.547065 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:25:24.550383 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:25:24.558074 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:25:24.558101 kernel: GPT:9289727 != 19775487 Jul 11 00:25:24.560257 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:25:24.560397 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:24.564536 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:25:24.564555 kernel: GPT:9289727 != 19775487 Jul 11 00:25:24.564482 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:25:24.567840 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:25:24.567864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:24.566681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:25:24.566809 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:24.568868 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:24.575408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:24.587167 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (520) Jul 11 00:25:24.587959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 11 00:25:24.591365 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (510) Jul 11 00:25:24.594211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:24.601751 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 11 00:25:24.606277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 11 00:25:24.610130 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 11 00:25:24.611322 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 11 00:25:24.626385 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 11 00:25:24.628097 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 11 00:25:24.634028 disk-uuid[551]: Primary Header is updated. Jul 11 00:25:24.634028 disk-uuid[551]: Secondary Entries is updated. Jul 11 00:25:24.634028 disk-uuid[551]: Secondary Header is updated. Jul 11 00:25:24.639290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:24.651165 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:25.649048 disk-uuid[552]: The operation has completed successfully. Jul 11 00:25:25.650160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:25:25.670050 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:25:25.670148 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 11 00:25:25.700331 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 11 00:25:25.703051 sh[574]: Success Jul 11 00:25:25.718206 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 11 00:25:25.753564 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 11 00:25:25.755311 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 11 00:25:25.756350 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 11 00:25:25.765978 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7 Jul 11 00:25:25.766034 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:25:25.766054 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 11 00:25:25.767819 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 11 00:25:25.767835 kernel: BTRFS info (device dm-0): using free space tree Jul 11 00:25:25.771595 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 11 00:25:25.772856 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 11 00:25:25.784384 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 11 00:25:25.786366 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 11 00:25:25.793246 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184 Jul 11 00:25:25.793279 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:25:25.793289 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:25:25.796204 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:25:25.803403 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:25:25.805209 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184 Jul 11 00:25:25.810602 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 11 00:25:25.816344 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
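
The GPT warnings above ("GPT:9289727 != 19775487", "Alternate GPT header not at the end of the disk") are typical of a disk image that was grown after creation: vda now reports 19775488 512-byte logical blocks, but the backup GPT header was found at LBA 9289727 rather than at the last sector, which is why disk-uuid subsequently reports updating the Secondary Entries and Secondary Header. A quick arithmetic check of those numbers (assuming the 512-byte sectors reported for vda):

    # Quick size check for the GPT mismatch reported above (assumes 512-byte sectors).
    SECTOR = 512
    total_sectors = 19_775_488   # logical blocks reported for vda
    found_alt_lba = 9_289_727    # where the backup GPT header was actually found
    print(total_sectors * SECTOR)           # 10125049856 bytes ~= 10.1 GB (9.43 GiB), as logged
    print(total_sectors - 1)                # 19775487: LBA where the backup header belongs
    print(round((found_alt_lba + 1) * SECTOR / 2**30, 2))  # ~4.43 GiB, consistent with a smaller original image
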
Jul 11 00:25:25.889250 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:25:25.899356 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:25:25.909932 ignition[669]: Ignition 2.19.0 Jul 11 00:25:25.909942 ignition[669]: Stage: fetch-offline Jul 11 00:25:25.909974 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:25.909982 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:25.910130 ignition[669]: parsed url from cmdline: "" Jul 11 00:25:25.910134 ignition[669]: no config URL provided Jul 11 00:25:25.910138 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:25:25.910145 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:25:25.910165 ignition[669]: op(1): [started] loading QEMU firmware config module Jul 11 00:25:25.910169 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:25:25.920420 ignition[669]: op(1): [finished] loading QEMU firmware config module Jul 11 00:25:25.922765 systemd-networkd[764]: lo: Link UP Jul 11 00:25:25.922780 systemd-networkd[764]: lo: Gained carrier Jul 11 00:25:25.923447 systemd-networkd[764]: Enumeration completed Jul 11 00:25:25.923851 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:25:25.923875 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:25.923879 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:25:25.924751 systemd-networkd[764]: eth0: Link UP Jul 11 00:25:25.924754 systemd-networkd[764]: eth0: Gained carrier Jul 11 00:25:25.924761 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:25.925880 systemd[1]: Reached target network.target - Network. Jul 11 00:25:25.950225 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:25:25.970287 ignition[669]: parsing config with SHA512: ac184ba04412ed13b407ff96235d282f7bfda44c7b5b5c2a661cb8c36cca71f6331b743b95c119d509b65761fd248bc64522c09439d0b525d196f73249a00fdb Jul 11 00:25:25.975655 unknown[669]: fetched base config from "system" Jul 11 00:25:25.975665 unknown[669]: fetched user config from "qemu" Jul 11 00:25:25.976105 ignition[669]: fetch-offline: fetch-offline passed Jul 11 00:25:25.977594 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:25:25.976167 ignition[669]: Ignition finished successfully Jul 11 00:25:25.979535 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:25:25.985344 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 11 00:25:25.995140 ignition[771]: Ignition 2.19.0 Jul 11 00:25:25.995149 ignition[771]: Stage: kargs Jul 11 00:25:25.995376 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:25.995386 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:25.998366 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
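
In the fetch-offline stage above, Ignition finds no config URL on the kernel command line and no config at "/usr/lib/ignition/user.ign", so it loads the qemu_fw_cfg module, fetches the user config from QEMU, and logs a SHA512 digest of the config it parses. Purely as an illustration (the helper and path below are hypothetical, not part of the boot output), such a digest for a config file on disk can be reproduced like this:

    # Illustrative sketch: SHA512 of a config file, in the hex form Ignition
    # logs above ("parsing config with SHA512: <hex>").
    import hashlib

    def config_sha512(path: str) -> str:
        with open(path, "rb") as fh:
            return hashlib.sha512(fh.read()).hexdigest()

    # e.g. config_sha512("/path/to/config.ign")  # hypothetical path
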
Jul 11 00:25:25.996281 ignition[771]: kargs: kargs passed Jul 11 00:25:25.996330 ignition[771]: Ignition finished successfully Jul 11 00:25:26.012392 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 11 00:25:26.021650 ignition[779]: Ignition 2.19.0 Jul 11 00:25:26.021657 ignition[779]: Stage: disks Jul 11 00:25:26.021810 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:26.021819 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:26.022721 ignition[779]: disks: disks passed Jul 11 00:25:26.025263 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 11 00:25:26.022763 ignition[779]: Ignition finished successfully Jul 11 00:25:26.026532 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 11 00:25:26.028175 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:25:26.029946 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:25:26.031836 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:25:26.033768 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:25:26.043366 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 11 00:25:26.052114 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 11 00:25:26.056568 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 11 00:25:26.058784 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 11 00:25:26.105212 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none. Jul 11 00:25:26.105247 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 11 00:25:26.106441 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 11 00:25:26.115311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:25:26.116987 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 11 00:25:26.118443 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 11 00:25:26.118481 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:25:26.128105 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (797) Jul 11 00:25:26.128128 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184 Jul 11 00:25:26.128144 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:25:26.128155 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:25:26.118502 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:25:26.122659 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 11 00:25:26.132063 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:25:26.126264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 11 00:25:26.132549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 00:25:26.165007 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:25:26.168249 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:25:26.173195 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:25:26.176565 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:25:26.241269 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 11 00:25:26.248273 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 11 00:25:26.250590 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 11 00:25:26.255203 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184 Jul 11 00:25:26.269132 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 11 00:25:26.272362 ignition[913]: INFO : Ignition 2.19.0 Jul 11 00:25:26.272362 ignition[913]: INFO : Stage: mount Jul 11 00:25:26.273853 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:26.273853 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:26.273853 ignition[913]: INFO : mount: mount passed Jul 11 00:25:26.273853 ignition[913]: INFO : Ignition finished successfully Jul 11 00:25:26.275864 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 11 00:25:26.287294 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 11 00:25:26.764970 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 11 00:25:26.775385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 11 00:25:26.781933 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (926) Jul 11 00:25:26.781966 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184 Jul 11 00:25:26.782910 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:25:26.782926 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:25:26.786199 kernel: BTRFS info (device vda6): auto enabling async discard Jul 11 00:25:26.787020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 11 00:25:26.807824 ignition[943]: INFO : Ignition 2.19.0 Jul 11 00:25:26.807824 ignition[943]: INFO : Stage: files Jul 11 00:25:26.809525 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:26.809525 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:26.809525 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:25:26.812940 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:25:26.812940 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:25:26.812940 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:25:26.812940 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:25:26.812940 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:25:26.812940 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:25:26.811996 unknown[943]: wrote ssh authorized keys file for user: core Jul 11 00:25:26.821909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 11 00:25:26.821909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 11 00:25:26.821909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 11 00:25:26.962648 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 11 00:25:27.105891 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 11 00:25:27.105891 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:25:27.110436 ignition[943]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:25:27.110436 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 11 00:25:27.229318 systemd-networkd[764]: eth0: Gained IPv6LL Jul 11 00:25:27.618559 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 11 00:25:28.114931 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:25:28.114931 ignition[943]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 11 00:25:28.118588 ignition[943]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:25:28.138799 ignition[943]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:25:28.142165 ignition[943]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:25:28.143673 ignition[943]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:25:28.143673 ignition[943]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:25:28.143673 
ignition[943]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:25:28.143673 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:25:28.143673 ignition[943]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:25:28.143673 ignition[943]: INFO : files: files passed Jul 11 00:25:28.143673 ignition[943]: INFO : Ignition finished successfully Jul 11 00:25:28.145718 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:25:28.167326 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:25:28.169073 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 11 00:25:28.171307 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:25:28.172962 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 11 00:25:28.176272 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Jul 11 00:25:28.177849 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:25:28.177849 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:25:28.180769 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:25:28.180439 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:25:28.182012 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 11 00:25:28.189304 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 11 00:25:28.206477 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:25:28.206571 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 11 00:25:28.208628 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 11 00:25:28.210391 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 11 00:25:28.212141 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 11 00:25:28.212797 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 11 00:25:28.227617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:25:28.229887 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 11 00:25:28.240274 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:25:28.241437 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:25:28.243442 systemd[1]: Stopped target timers.target - Timer Units. Jul 11 00:25:28.245123 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:25:28.245258 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 11 00:25:28.247708 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 11 00:25:28.249654 systemd[1]: Stopped target basic.target - Basic System. Jul 11 00:25:28.251238 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jul 11 00:25:28.252911 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 11 00:25:28.254800 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 11 00:25:28.256711 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 11 00:25:28.258489 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 11 00:25:28.260395 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 11 00:25:28.262261 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 11 00:25:28.263964 systemd[1]: Stopped target swap.target - Swaps. Jul 11 00:25:28.265449 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:25:28.265569 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 11 00:25:28.267839 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:25:28.269823 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:25:28.271685 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:25:28.276231 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:25:28.277477 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:25:28.277594 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:25:28.280280 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:25:28.280404 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:25:28.282385 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:25:28.283905 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:25:28.290215 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:25:28.291463 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:25:28.293482 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:25:28.294999 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:25:28.295084 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:25:28.296605 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:25:28.296691 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:25:28.298178 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:25:28.298311 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:25:28.300037 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:25:28.300135 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:25:28.313406 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:25:28.315729 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:25:28.316561 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:25:28.316681 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:25:28.318532 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:25:28.318630 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:25:28.323487 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:25:28.323565 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 11 00:25:28.327938 ignition[997]: INFO : Ignition 2.19.0 Jul 11 00:25:28.328848 ignition[997]: INFO : Stage: umount Jul 11 00:25:28.328848 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:25:28.328848 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:25:28.331827 ignition[997]: INFO : umount: umount passed Jul 11 00:25:28.331827 ignition[997]: INFO : Ignition finished successfully Jul 11 00:25:28.330290 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:25:28.332014 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:25:28.332101 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:25:28.334490 systemd[1]: Stopped target network.target - Network. Jul 11 00:25:28.335375 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:25:28.335439 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:25:28.337052 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:25:28.337098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:25:28.338830 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:25:28.338870 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:25:28.340527 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:25:28.340571 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:25:28.342404 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:25:28.345927 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:25:28.352950 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:25:28.353063 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:25:28.354294 systemd-networkd[764]: eth0: DHCPv6 lease lost Jul 11 00:25:28.355963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:25:28.356014 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:25:28.358240 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:25:28.359317 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:25:28.360560 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:25:28.360594 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:25:28.367305 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:25:28.368996 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:25:28.369056 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:25:28.370950 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:25:28.370996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:25:28.372782 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:25:28.372827 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:25:28.375019 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:25:28.384765 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:25:28.385835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:25:28.393284 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 11 00:25:28.393396 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:25:28.395285 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:25:28.395407 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:25:28.397671 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:25:28.397726 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:25:28.398795 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:25:28.398829 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:25:28.400771 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:25:28.400817 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:25:28.403386 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:25:28.403429 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:25:28.406155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:25:28.406215 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:25:28.409040 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:25:28.409087 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:25:28.416398 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:25:28.418405 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:25:28.418468 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:25:28.420505 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 11 00:25:28.420547 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:25:28.422650 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:25:28.422692 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:25:28.424683 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:25:28.424726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:28.427307 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:25:28.427407 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:25:28.429174 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:25:28.431443 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:25:28.439761 systemd[1]: Switching root. Jul 11 00:25:28.463109 systemd-journald[238]: Journal stopped Jul 11 00:25:29.172359 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
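This is the hand-off point: the initrd journald (PID 238) stops and PID 1 switches root. Once the persistent journal is available later in the boot, the initrd-stage Ignition messages above can still be retrieved from it. A small sketch, assuming journalctl is on PATH and that Ignition's entries carry the syslog identifier ignition, as the ignition[943]: prefix suggests:

```python
import json
import subprocess

# List this boot's Ignition messages as structured journal entries.
out = subprocess.run(
    ["journalctl", "-b", "0", "-o", "json", "SYSLOG_IDENTIFIER=ignition"],
    check=True, capture_output=True, text=True,
).stdout

for line in out.splitlines():
    entry = json.loads(line)
    print(entry.get("_PID"), entry.get("MESSAGE"))
```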
Jul 11 00:25:29.172414 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:25:29.172426 kernel: SELinux: policy capability open_perms=1 Jul 11 00:25:29.172435 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:25:29.172445 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:25:29.172458 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:25:29.172468 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:25:29.172477 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:25:29.172486 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:25:29.172495 kernel: audit: type=1403 audit(1752193528.639:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:25:29.172507 systemd[1]: Successfully loaded SELinux policy in 31.678ms. Jul 11 00:25:29.172523 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.904ms. Jul 11 00:25:29.172534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:25:29.172546 systemd[1]: Detected virtualization kvm. Jul 11 00:25:29.172556 systemd[1]: Detected architecture arm64. Jul 11 00:25:29.172566 systemd[1]: Detected first boot. Jul 11 00:25:29.172579 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:25:29.172589 zram_generator::config[1061]: No configuration found. Jul 11 00:25:29.172602 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:25:29.172612 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:25:29.172623 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:25:29.172638 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:25:29.172649 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:25:29.172659 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:25:29.172670 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:25:29.172685 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:25:29.172695 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:25:29.172707 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:25:29.172720 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:25:29.172730 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:25:29.172741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:25:29.172751 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:25:29.172761 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:25:29.172772 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:25:29.172782 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 11 00:25:29.172792 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 11 00:25:29.172804 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:25:29.172816 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:25:29.172826 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:25:29.172837 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:25:29.172847 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:25:29.172857 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:25:29.172868 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:25:29.172878 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:25:29.172890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 11 00:25:29.172900 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 11 00:25:29.172910 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:25:29.172921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:25:29.172931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:25:29.172941 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:25:29.172951 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:25:29.172961 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:25:29.172972 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:25:29.172984 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:25:29.172994 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:25:29.173004 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:25:29.173015 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:25:29.173025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:29.173037 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:25:29.173047 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:25:29.173058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:29.173068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:25:29.173080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:29.173090 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:25:29.173101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:25:29.173111 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:25:29.173122 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 11 00:25:29.173132 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 11 00:25:29.173142 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 11 00:25:29.173153 kernel: fuse: init (API version 7.39) Jul 11 00:25:29.173164 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:25:29.173174 kernel: loop: module loaded Jul 11 00:25:29.173192 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:25:29.173205 kernel: ACPI: bus type drm_connector registered Jul 11 00:25:29.173215 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:25:29.173226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:25:29.173253 systemd-journald[1143]: Collecting audit messages is disabled. Jul 11 00:25:29.173274 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:25:29.173288 systemd-journald[1143]: Journal started Jul 11 00:25:29.173317 systemd-journald[1143]: Runtime Journal (/run/log/journal/157f5bffe28e4b5c8798b626e099994f) is 5.9M, max 47.3M, 41.4M free. Jul 11 00:25:29.175686 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:25:29.176654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:25:29.177851 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:25:29.178917 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:25:29.180165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:25:29.181341 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:25:29.182597 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:25:29.184023 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:25:29.185542 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:25:29.185701 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:25:29.187062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:29.187240 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:29.188551 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:25:29.188706 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:25:29.190097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:29.190278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:29.191676 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:25:29.191837 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:25:29.193139 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:25:29.193588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:29.194970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:25:29.196588 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:25:29.198072 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:25:29.209605 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:25:29.218290 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:25:29.220370 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
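The runtime journal above lives in /run/log/journal and is capped at 47.3M. Its current footprint (and, after the flush that follows, the persistent journal's) can be checked with journalctl's built-in accounting; a trivial sketch:

```python
import subprocess

# Report how much disk the active and archived journal files occupy.
print(subprocess.run(
    ["journalctl", "--disk-usage"],
    check=True, capture_output=True, text=True,
).stdout.strip())
```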
Jul 11 00:25:29.221520 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:25:29.223363 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:25:29.227859 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:25:29.230921 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:25:29.232157 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:25:29.233391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:25:29.233999 systemd-journald[1143]: Time spent on flushing to /var/log/journal/157f5bffe28e4b5c8798b626e099994f is 20.349ms for 842 entries. Jul 11 00:25:29.233999 systemd-journald[1143]: System Journal (/var/log/journal/157f5bffe28e4b5c8798b626e099994f) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:25:29.261444 systemd-journald[1143]: Received client request to flush runtime journal. Jul 11 00:25:29.236359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:25:29.241269 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:25:29.243980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:25:29.245517 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:25:29.249390 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:25:29.250849 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:25:29.253495 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:25:29.268199 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:25:29.269858 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:25:29.271583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:25:29.278913 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jul 11 00:25:29.278931 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jul 11 00:25:29.279712 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:25:29.283055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:25:29.295355 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:25:29.313938 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:25:29.326346 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:25:29.337326 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 11 00:25:29.337344 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 11 00:25:29.341002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:25:29.653220 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:25:29.665332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 11 00:25:29.683018 systemd-udevd[1223]: Using default interface naming scheme 'v255'. Jul 11 00:25:29.696670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:25:29.708977 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:25:29.721346 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:25:29.724221 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jul 11 00:25:29.747250 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1224) Jul 11 00:25:29.771420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:25:29.772984 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:25:29.824483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:25:29.830788 systemd-networkd[1230]: lo: Link UP Jul 11 00:25:29.830797 systemd-networkd[1230]: lo: Gained carrier Jul 11 00:25:29.831497 systemd-networkd[1230]: Enumeration completed Jul 11 00:25:29.831653 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:25:29.831918 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:29.831921 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:25:29.832562 systemd-networkd[1230]: eth0: Link UP Jul 11 00:25:29.832567 systemd-networkd[1230]: eth0: Gained carrier Jul 11 00:25:29.832579 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:25:29.833489 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:25:29.836615 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:25:29.838789 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:25:29.850165 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.118/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:25:29.861668 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:25:29.868782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:25:29.896542 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:25:29.897940 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:25:29.909385 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:25:29.912593 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:25:29.947544 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:25:29.948909 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 11 00:25:29.950147 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:25:29.950179 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:25:29.951156 systemd[1]: Reached target machines.target - Containers. 
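systemd-networkd has brought eth0 up and acquired a DHCPv4 lease (10.0.0.118/16, gateway 10.0.0.1). A quick way to confirm what actually landed on the interface, assuming iproute2's JSON output is available:

```python
import json
import subprocess

# Show the IPv4 address(es) currently configured on eth0.
links = json.loads(subprocess.run(
    ["ip", "-j", "addr", "show", "dev", "eth0"],
    check=True, capture_output=True, text=True,
).stdout)

for addr in links[0]["addr_info"]:
    if addr["family"] == "inet":
        print(f'{addr["local"]}/{addr["prefixlen"]}')
```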
Jul 11 00:25:29.953071 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:25:29.972334 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:25:29.974477 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:25:29.975552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:29.976370 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:25:29.980326 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:25:29.982478 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:25:29.984263 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:25:29.994490 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:25:29.998276 kernel: loop0: detected capacity change from 0 to 114432 Jul 11 00:25:30.005384 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:25:30.006740 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:25:30.011202 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:25:30.045235 kernel: loop1: detected capacity change from 0 to 203944 Jul 11 00:25:30.076220 kernel: loop2: detected capacity change from 0 to 114328 Jul 11 00:25:30.114201 kernel: loop3: detected capacity change from 0 to 114432 Jul 11 00:25:30.119225 kernel: loop4: detected capacity change from 0 to 203944 Jul 11 00:25:30.125225 kernel: loop5: detected capacity change from 0 to 114328 Jul 11 00:25:30.129056 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:25:30.129479 (sd-merge)[1289]: Merged extensions into '/usr'. Jul 11 00:25:30.132748 systemd[1]: Reloading requested from client PID 1277 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:25:30.132763 systemd[1]: Reloading... Jul 11 00:25:30.175278 zram_generator::config[1321]: No configuration found. Jul 11 00:25:30.223991 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:25:30.267873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:25:30.310834 systemd[1]: Reloading finished in 177 ms. Jul 11 00:25:30.328008 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:25:30.329520 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:25:30.349364 systemd[1]: Starting ensure-sysext.service... Jul 11 00:25:30.351261 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:25:30.356216 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:25:30.356230 systemd[1]: Reloading... Jul 11 00:25:30.366993 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
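The (sd-merge) lines above show systemd-sysext attaching the loop-mounted extension images and merging containerd-flatcar, docker-flatcar and kubernetes into /usr. Only the kubernetes image is staged through /etc/extensions (the link Ignition wrote earlier); a short sketch listing what is staged there:

```python
from pathlib import Path

# List sysext images linked under /etc/extensions and where they point.
for p in sorted(Path("/etc/extensions").glob("*.raw")):
    print(p.name, "->", p.resolve())
```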
Jul 11 00:25:30.367309 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:25:30.367918 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:25:30.368139 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jul 11 00:25:30.368211 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jul 11 00:25:30.370453 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:25:30.370467 systemd-tmpfiles[1360]: Skipping /boot Jul 11 00:25:30.377152 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:25:30.377168 systemd-tmpfiles[1360]: Skipping /boot Jul 11 00:25:30.404236 zram_generator::config[1392]: No configuration found. Jul 11 00:25:30.487851 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:25:30.530251 systemd[1]: Reloading finished in 173 ms. Jul 11 00:25:30.544899 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:25:30.567344 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:25:30.570645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:25:30.574365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:25:30.579444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:25:30.583425 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:25:30.592844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:30.594418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:30.599651 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:30.603545 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:25:30.604704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:30.605389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:30.605530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:30.608906 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:30.609041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:30.613657 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:25:30.613867 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:30.616090 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:25:30.620624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:30.629500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:30.635437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:30.637594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
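systemd-tmpfiles warns about duplicate lines for /root, /var/log/journal and /var/lib/systemd; these are harmless, but the offending fragments can be located from the merged configuration. A sketch, assuming systemd-analyze cat-config prefixes each fragment with a '# /path' header line:

```python
import re
import subprocess

# Find which tmpfiles.d fragments mention /root, the first duplicated path.
out = subprocess.run(
    ["systemd-analyze", "cat-config", "tmpfiles.d"],
    check=True, capture_output=True, text=True,
).stdout

source = None
for line in out.splitlines():
    if line.startswith("# /"):
        source = line[2:]  # header naming the fragment
    elif re.search(r"\s/root(\s|$)", line):
        print(source, "->", line.strip())
```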
Jul 11 00:25:30.639693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:30.641584 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:25:30.643462 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:25:30.648396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:25:30.649267 augenrules[1471]: No rules Jul 11 00:25:30.656485 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:25:30.658438 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:30.658595 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:30.660301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:30.660445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:30.660552 systemd-resolved[1436]: Positive Trust Anchors: Jul 11 00:25:30.660563 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:25:30.660596 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:25:30.662071 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:25:30.662338 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:30.668649 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:25:30.673850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:25:30.673970 systemd-resolved[1436]: Defaulting to hostname 'linux'. Jul 11 00:25:30.680421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:25:30.682544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:25:30.684517 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:25:30.686597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:25:30.687732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:25:30.687875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:25:30.688497 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:25:30.690109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:25:30.690304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:25:30.691825 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:25:30.691960 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jul 11 00:25:30.693491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:25:30.693643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:25:30.695205 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:25:30.695413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:25:30.698302 systemd[1]: Finished ensure-sysext.service. Jul 11 00:25:30.702653 systemd[1]: Reached target network.target - Network. Jul 11 00:25:30.703571 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:25:30.704799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:25:30.704862 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:25:30.714367 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:25:30.757083 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:25:31.193244 systemd-resolved[1436]: Clock change detected. Flushing caches. Jul 11 00:25:31.193281 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:25:31.193324 systemd-timesyncd[1503]: Initial clock synchronization to Fri 2025-07-11 00:25:31.193180 UTC. Jul 11 00:25:31.193455 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:25:31.194592 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:25:31.195809 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:25:31.197002 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:25:31.198194 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:25:31.198232 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:25:31.199137 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:25:31.200265 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:25:31.201392 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:25:31.202607 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:25:31.204218 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:25:31.206666 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:25:31.208985 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:25:31.214756 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:25:31.215845 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:25:31.216786 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:25:31.217882 systemd[1]: System is tainted: cgroupsv1 Jul 11 00:25:31.217925 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:25:31.217955 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:25:31.219018 systemd[1]: Starting containerd.service - containerd container runtime... 
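systemd-timesyncd performs its initial synchronization against 10.0.0.1:123, which is why the journal timestamps jump by roughly 0.44 s here and systemd-resolved flushes its caches. The active peer can be inspected after boot; the property names below are the usual timedatectl ones and may vary slightly between systemd versions:

```python
import subprocess

# Show which NTP server timesyncd is using and its current poll interval.
print(subprocess.run(
    ["timedatectl", "show-timesync",
     "--property=ServerAddress,PollIntervalUSec"],
    check=True, capture_output=True, text=True,
).stdout)
```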
Jul 11 00:25:31.220978 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:25:31.222859 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:25:31.227004 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:25:31.228124 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:25:31.232000 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:25:31.233950 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:25:31.238122 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:25:31.239524 jq[1509]: false Jul 11 00:25:31.244641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:25:31.250291 extend-filesystems[1511]: Found loop3 Jul 11 00:25:31.251224 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:25:31.251384 extend-filesystems[1511]: Found loop4 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found loop5 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda1 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda2 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda3 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found usr Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda4 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda6 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda7 Jul 11 00:25:31.256582 extend-filesystems[1511]: Found vda9 Jul 11 00:25:31.256582 extend-filesystems[1511]: Checking size of /dev/vda9 Jul 11 00:25:31.256313 dbus-daemon[1508]: [system] SELinux support is enabled Jul 11 00:25:31.257488 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:25:31.281335 extend-filesystems[1511]: Resized partition /dev/vda9 Jul 11 00:25:31.282792 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:25:31.258529 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:25:31.282999 extend-filesystems[1536]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:25:31.264417 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:25:31.268553 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:25:31.277180 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:25:31.286218 jq[1529]: true Jul 11 00:25:31.277393 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:25:31.277626 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:25:31.277809 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:25:31.288133 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:25:31.288490 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 11 00:25:31.290842 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1240) Jul 11 00:25:31.304621 jq[1541]: true Jul 11 00:25:31.320344 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:25:31.342555 tar[1539]: linux-arm64/helm Jul 11 00:25:31.342816 update_engine[1527]: I20250711 00:25:31.335595 1527 main.cc:92] Flatcar Update Engine starting Jul 11 00:25:31.342816 update_engine[1527]: I20250711 00:25:31.341300 1527 update_check_scheduler.cc:74] Next update check in 4m59s Jul 11 00:25:31.321157 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:25:31.343249 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:25:31.343249 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:25:31.343249 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:25:31.330259 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:25:31.351026 extend-filesystems[1511]: Resized filesystem in /dev/vda9 Jul 11 00:25:31.330290 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:25:31.332455 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:25:31.332472 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:25:31.341171 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:25:31.343262 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:25:31.353095 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:25:31.355197 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:25:31.355412 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:25:31.361982 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:25:31.365421 systemd-logind[1524]: New seat seat0. Jul 11 00:25:31.367579 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:25:31.372848 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:25:31.377699 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:25:31.380328 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:25:31.405745 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:25:31.504044 systemd-networkd[1230]: eth0: Gained IPv6LL Jul 11 00:25:31.511059 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:25:31.512794 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:25:31.532229 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:25:31.534861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
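extend-filesystems grows the root filesystem on /dev/vda9 online to fill the partition; resize2fs reports 553472 to 1864699 blocks, which with 4 KiB ext4 blocks works out as below:

```python
# Worked numbers from the resize messages above (4 KiB ext4 blocks).
BLOCK = 4096
before = 553_472 * BLOCK    # 2,267,021,312 bytes
after = 1_864_699 * BLOCK   # 7,637,807,104 bytes
print(f"{before / 2**30:.2f} GiB -> {after / 2**30:.2f} GiB")  # 2.11 GiB -> 7.11 GiB
```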
Jul 11 00:25:31.538333 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:25:31.552862 containerd[1542]: time="2025-07-11T00:25:31.551185623Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:25:31.569818 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:25:31.570088 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:25:31.573473 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:25:31.585135 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:25:31.589439 containerd[1542]: time="2025-07-11T00:25:31.589389383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591002 containerd[1542]: time="2025-07-11T00:25:31.590901103Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591002 containerd[1542]: time="2025-07-11T00:25:31.590943743Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:25:31.591002 containerd[1542]: time="2025-07-11T00:25:31.590960663Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:25:31.591128 containerd[1542]: time="2025-07-11T00:25:31.591103783Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:25:31.591154 containerd[1542]: time="2025-07-11T00:25:31.591127223Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591202 containerd[1542]: time="2025-07-11T00:25:31.591182623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591226 containerd[1542]: time="2025-07-11T00:25:31.591199983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591409 containerd[1542]: time="2025-07-11T00:25:31.591386663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591409 containerd[1542]: time="2025-07-11T00:25:31.591406663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591457 containerd[1542]: time="2025-07-11T00:25:31.591419663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591457 containerd[1542]: time="2025-07-11T00:25:31.591429383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591511 containerd[1542]: time="2025-07-11T00:25:31.591494103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:25:31.591692 containerd[1542]: time="2025-07-11T00:25:31.591672183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:25:31.592419 containerd[1542]: time="2025-07-11T00:25:31.591798063Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:25:31.592419 containerd[1542]: time="2025-07-11T00:25:31.591816503Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:25:31.592419 containerd[1542]: time="2025-07-11T00:25:31.591909143Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:25:31.592419 containerd[1542]: time="2025-07-11T00:25:31.591963103Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:25:31.596555 containerd[1542]: time="2025-07-11T00:25:31.596519543Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:25:31.596633 containerd[1542]: time="2025-07-11T00:25:31.596570423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:25:31.596633 containerd[1542]: time="2025-07-11T00:25:31.596586783Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:25:31.596633 containerd[1542]: time="2025-07-11T00:25:31.596607823Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:25:31.596633 containerd[1542]: time="2025-07-11T00:25:31.596622743Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:25:31.596776 containerd[1542]: time="2025-07-11T00:25:31.596752783Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:25:31.598288 containerd[1542]: time="2025-07-11T00:25:31.598240343Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:25:31.598408 containerd[1542]: time="2025-07-11T00:25:31.598383423Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:25:31.598437 containerd[1542]: time="2025-07-11T00:25:31.598407663Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:25:31.598437 containerd[1542]: time="2025-07-11T00:25:31.598422303Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:25:31.598472 containerd[1542]: time="2025-07-11T00:25:31.598453623Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598472 containerd[1542]: time="2025-07-11T00:25:31.598467703Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598505 containerd[1542]: time="2025-07-11T00:25:31.598480823Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 11 00:25:31.598505 containerd[1542]: time="2025-07-11T00:25:31.598495023Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598544 containerd[1542]: time="2025-07-11T00:25:31.598515543Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598544 containerd[1542]: time="2025-07-11T00:25:31.598529663Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598544 containerd[1542]: time="2025-07-11T00:25:31.598542103Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598592 containerd[1542]: time="2025-07-11T00:25:31.598554023Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:25:31.598592 containerd[1542]: time="2025-07-11T00:25:31.598574023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598592 containerd[1542]: time="2025-07-11T00:25:31.598588143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598645 containerd[1542]: time="2025-07-11T00:25:31.598600223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598645 containerd[1542]: time="2025-07-11T00:25:31.598613023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598645 containerd[1542]: time="2025-07-11T00:25:31.598624783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598645 containerd[1542]: time="2025-07-11T00:25:31.598637343Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598714 containerd[1542]: time="2025-07-11T00:25:31.598648743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598714 containerd[1542]: time="2025-07-11T00:25:31.598661023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598714 containerd[1542]: time="2025-07-11T00:25:31.598674063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598714 containerd[1542]: time="2025-07-11T00:25:31.598687783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598714 containerd[1542]: time="2025-07-11T00:25:31.598698143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598714 containerd[1542]: time="2025-07-11T00:25:31.598710303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598811 containerd[1542]: time="2025-07-11T00:25:31.598722103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598811 containerd[1542]: time="2025-07-11T00:25:31.598737423Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 11 00:25:31.598811 containerd[1542]: time="2025-07-11T00:25:31.598756583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598811 containerd[1542]: time="2025-07-11T00:25:31.598768183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.598811 containerd[1542]: time="2025-07-11T00:25:31.598778503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.598935383Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.598956303Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.598967183Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.598979823Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.598989543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.599001463Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.599010823Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:25:31.599608 containerd[1542]: time="2025-07-11T00:25:31.599021543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:25:31.599759 containerd[1542]: time="2025-07-11T00:25:31.599357503Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:25:31.599759 containerd[1542]: time="2025-07-11T00:25:31.599417863Z" level=info msg="Connect containerd service" Jul 11 00:25:31.599759 containerd[1542]: time="2025-07-11T00:25:31.599449543Z" level=info msg="using legacy CRI server" Jul 11 00:25:31.599759 containerd[1542]: time="2025-07-11T00:25:31.599456103Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:25:31.599759 containerd[1542]: time="2025-07-11T00:25:31.599530143Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:25:31.600794 containerd[1542]: time="2025-07-11T00:25:31.600398063Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 
00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.600773183Z" level=info msg="Start subscribing containerd event" Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.600843503Z" level=info msg="Start recovering state" Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.600980583Z" level=info msg="Start event monitor" Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.600997503Z" level=info msg="Start snapshots syncer" Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.601008463Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.601036343Z" level=info msg="Start streaming server" Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.601324983Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.601373063Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:25:31.602002 containerd[1542]: time="2025-07-11T00:25:31.601432943Z" level=info msg="containerd successfully booted in 0.051555s" Jul 11 00:25:31.601530 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:25:31.713510 tar[1539]: linux-arm64/LICENSE Jul 11 00:25:31.713510 tar[1539]: linux-arm64/README.md Jul 11 00:25:31.726017 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:25:31.814148 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:25:31.833354 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:25:31.849133 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:25:31.856100 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:25:31.856328 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:25:31.859326 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:25:31.870849 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:25:31.873939 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:25:31.876118 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 00:25:31.877561 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:25:32.117978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:32.119465 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 00:25:32.120658 systemd[1]: Startup finished in 5.505s (kernel) + 3.078s (userspace) = 8.583s. Jul 11 00:25:32.121522 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:25:32.533276 kubelet[1644]: E0711 00:25:32.533164 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:25:32.535451 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:25:32.535636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:25:36.404055 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jul 11 00:25:36.418040 systemd[1]: Started sshd@0-10.0.0.118:22-10.0.0.1:44074.service - OpenSSH per-connection server daemon (10.0.0.1:44074). Jul 11 00:25:36.492146 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 44074 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:36.494214 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:36.505797 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:25:36.515081 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:25:36.516959 systemd-logind[1524]: New session 1 of user core. Jul 11 00:25:36.524931 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:25:36.527655 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:25:36.533363 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:25:36.609780 systemd[1663]: Queued start job for default target default.target. Jul 11 00:25:36.610424 systemd[1663]: Created slice app.slice - User Application Slice. Jul 11 00:25:36.610449 systemd[1663]: Reached target paths.target - Paths. Jul 11 00:25:36.610461 systemd[1663]: Reached target timers.target - Timers. Jul 11 00:25:36.623945 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:25:36.629387 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:25:36.629446 systemd[1663]: Reached target sockets.target - Sockets. Jul 11 00:25:36.629458 systemd[1663]: Reached target basic.target - Basic System. Jul 11 00:25:36.629492 systemd[1663]: Reached target default.target - Main User Target. Jul 11 00:25:36.629518 systemd[1663]: Startup finished in 91ms. Jul 11 00:25:36.629845 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:25:36.631337 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:25:36.690071 systemd[1]: Started sshd@1-10.0.0.118:22-10.0.0.1:44080.service - OpenSSH per-connection server daemon (10.0.0.1:44080). Jul 11 00:25:36.729420 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 44080 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:36.730660 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:36.734587 systemd-logind[1524]: New session 2 of user core. Jul 11 00:25:36.744172 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 00:25:36.795983 sshd[1675]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:36.813069 systemd[1]: Started sshd@2-10.0.0.118:22-10.0.0.1:44096.service - OpenSSH per-connection server daemon (10.0.0.1:44096). Jul 11 00:25:36.813439 systemd[1]: sshd@1-10.0.0.118:22-10.0.0.1:44080.service: Deactivated successfully. Jul 11 00:25:36.815839 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:25:36.816002 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:25:36.818322 systemd-logind[1524]: Removed session 2. Jul 11 00:25:36.841448 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 44096 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:36.842603 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:36.846137 systemd-logind[1524]: New session 3 of user core. 
Jul 11 00:25:36.857105 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:25:36.905272 sshd[1680]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:36.915143 systemd[1]: Started sshd@3-10.0.0.118:22-10.0.0.1:44098.service - OpenSSH per-connection server daemon (10.0.0.1:44098). Jul 11 00:25:36.915606 systemd[1]: sshd@2-10.0.0.118:22-10.0.0.1:44096.service: Deactivated successfully. Jul 11 00:25:36.916992 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:25:36.917564 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:25:36.918642 systemd-logind[1524]: Removed session 3. Jul 11 00:25:36.944212 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 44098 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:36.945384 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:36.950026 systemd-logind[1524]: New session 4 of user core. Jul 11 00:25:36.960112 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:25:37.014926 sshd[1688]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:37.030083 systemd[1]: Started sshd@4-10.0.0.118:22-10.0.0.1:44112.service - OpenSSH per-connection server daemon (10.0.0.1:44112). Jul 11 00:25:37.030501 systemd[1]: sshd@3-10.0.0.118:22-10.0.0.1:44098.service: Deactivated successfully. Jul 11 00:25:37.031957 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:25:37.032544 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:25:37.033948 systemd-logind[1524]: Removed session 4. Jul 11 00:25:37.059713 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 44112 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:37.060883 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:37.064981 systemd-logind[1524]: New session 5 of user core. Jul 11 00:25:37.071095 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:25:37.142381 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:25:37.142680 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:25:37.156684 sudo[1703]: pam_unix(sudo:session): session closed for user root Jul 11 00:25:37.159338 sshd[1697]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:37.169076 systemd[1]: Started sshd@5-10.0.0.118:22-10.0.0.1:44118.service - OpenSSH per-connection server daemon (10.0.0.1:44118). Jul 11 00:25:37.169457 systemd[1]: sshd@4-10.0.0.118:22-10.0.0.1:44112.service: Deactivated successfully. Jul 11 00:25:37.171673 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:25:37.172380 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:25:37.173618 systemd-logind[1524]: Removed session 5. Jul 11 00:25:37.198755 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 44118 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:37.200133 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:37.204380 systemd-logind[1524]: New session 6 of user core. Jul 11 00:25:37.214141 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 11 00:25:37.267993 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:25:37.268279 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:25:37.271471 sudo[1713]: pam_unix(sudo:session): session closed for user root Jul 11 00:25:37.276140 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:25:37.276415 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:25:37.292055 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:25:37.293250 auditctl[1716]: No rules Jul 11 00:25:37.294098 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:25:37.294342 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:25:37.296023 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:25:37.319437 augenrules[1735]: No rules Jul 11 00:25:37.320780 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:25:37.322227 sudo[1712]: pam_unix(sudo:session): session closed for user root Jul 11 00:25:37.323800 sshd[1705]: pam_unix(sshd:session): session closed for user core Jul 11 00:25:37.336081 systemd[1]: Started sshd@6-10.0.0.118:22-10.0.0.1:44122.service - OpenSSH per-connection server daemon (10.0.0.1:44122). Jul 11 00:25:37.336473 systemd[1]: sshd@5-10.0.0.118:22-10.0.0.1:44118.service: Deactivated successfully. Jul 11 00:25:37.338530 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:25:37.339097 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:25:37.340491 systemd-logind[1524]: Removed session 6. Jul 11 00:25:37.368609 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 44122 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:25:37.370049 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:25:37.373885 systemd-logind[1524]: New session 7 of user core. Jul 11 00:25:37.384109 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:25:37.435359 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:25:37.435639 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:25:37.743047 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:25:37.743245 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:25:38.010714 dockerd[1767]: time="2025-07-11T00:25:38.010586623Z" level=info msg="Starting up" Jul 11 00:25:38.266577 dockerd[1767]: time="2025-07-11T00:25:38.266472543Z" level=info msg="Loading containers: start." Jul 11 00:25:38.358855 kernel: Initializing XFRM netlink socket Jul 11 00:25:38.417754 systemd-networkd[1230]: docker0: Link UP Jul 11 00:25:38.440132 dockerd[1767]: time="2025-07-11T00:25:38.440087343Z" level=info msg="Loading containers: done." 
Jul 11 00:25:38.451294 dockerd[1767]: time="2025-07-11T00:25:38.451201183Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:25:38.451459 dockerd[1767]: time="2025-07-11T00:25:38.451299303Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:25:38.451459 dockerd[1767]: time="2025-07-11T00:25:38.451397263Z" level=info msg="Daemon has completed initialization" Jul 11 00:25:38.480770 dockerd[1767]: time="2025-07-11T00:25:38.480638263Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:25:38.480919 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:25:39.037573 containerd[1542]: time="2025-07-11T00:25:39.037532143Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:25:39.633270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192338890.mount: Deactivated successfully. Jul 11 00:25:40.486119 containerd[1542]: time="2025-07-11T00:25:40.486049503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:40.486648 containerd[1542]: time="2025-07-11T00:25:40.486611863Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 11 00:25:40.487432 containerd[1542]: time="2025-07-11T00:25:40.487398543Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:40.490334 containerd[1542]: time="2025-07-11T00:25:40.490288983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:40.491960 containerd[1542]: time="2025-07-11T00:25:40.491916783Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.45432988s" Jul 11 00:25:40.491960 containerd[1542]: time="2025-07-11T00:25:40.491959903Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 11 00:25:40.494987 containerd[1542]: time="2025-07-11T00:25:40.494953343Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:25:41.509057 containerd[1542]: time="2025-07-11T00:25:41.509006103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:41.509604 containerd[1542]: time="2025-07-11T00:25:41.509570063Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 11 00:25:41.510500 containerd[1542]: time="2025-07-11T00:25:41.510464543Z" level=info msg="ImageCreate event 
name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:41.513501 containerd[1542]: time="2025-07-11T00:25:41.513464103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:41.514641 containerd[1542]: time="2025-07-11T00:25:41.514612423Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.01962124s" Jul 11 00:25:41.514685 containerd[1542]: time="2025-07-11T00:25:41.514643383Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 11 00:25:41.515115 containerd[1542]: time="2025-07-11T00:25:41.515085463Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:25:42.503207 containerd[1542]: time="2025-07-11T00:25:42.503158063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:42.503969 containerd[1542]: time="2025-07-11T00:25:42.503722143Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 11 00:25:42.504848 containerd[1542]: time="2025-07-11T00:25:42.504797783Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:42.507708 containerd[1542]: time="2025-07-11T00:25:42.507674143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:42.509056 containerd[1542]: time="2025-07-11T00:25:42.509015063Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 993.79416ms" Jul 11 00:25:42.509131 containerd[1542]: time="2025-07-11T00:25:42.509058583Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 11 00:25:42.509976 containerd[1542]: time="2025-07-11T00:25:42.509744703Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:25:42.681213 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:25:42.692088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 11 00:25:42.799156 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:25:42.799256 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:42.847327 kubelet[1989]: E0711 00:25:42.847283 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:25:42.850172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:25:42.850352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:25:43.525958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410098758.mount: Deactivated successfully. Jul 11 00:25:43.872015 containerd[1542]: time="2025-07-11T00:25:43.871843743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:43.872914 containerd[1542]: time="2025-07-11T00:25:43.872857023Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 11 00:25:43.873590 containerd[1542]: time="2025-07-11T00:25:43.873441743Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:43.876137 containerd[1542]: time="2025-07-11T00:25:43.876071863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:43.876778 containerd[1542]: time="2025-07-11T00:25:43.876700183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.36692116s" Jul 11 00:25:43.876778 containerd[1542]: time="2025-07-11T00:25:43.876731623Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 11 00:25:43.877227 containerd[1542]: time="2025-07-11T00:25:43.877181703Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:25:44.426226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059030246.mount: Deactivated successfully. 
Jul 11 00:25:45.127586 containerd[1542]: time="2025-07-11T00:25:45.127533943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:45.128215 containerd[1542]: time="2025-07-11T00:25:45.128176903Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 11 00:25:45.128930 containerd[1542]: time="2025-07-11T00:25:45.128902783Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:45.132449 containerd[1542]: time="2025-07-11T00:25:45.132409463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:45.133790 containerd[1542]: time="2025-07-11T00:25:45.133711823Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.2564738s" Jul 11 00:25:45.133790 containerd[1542]: time="2025-07-11T00:25:45.133745503Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 00:25:45.134699 containerd[1542]: time="2025-07-11T00:25:45.134676263Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:25:45.708892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040338283.mount: Deactivated successfully. 
Jul 11 00:25:45.713524 containerd[1542]: time="2025-07-11T00:25:45.713466543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:45.714138 containerd[1542]: time="2025-07-11T00:25:45.714103303Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 00:25:45.714770 containerd[1542]: time="2025-07-11T00:25:45.714734543Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:45.717312 containerd[1542]: time="2025-07-11T00:25:45.717273183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:45.718008 containerd[1542]: time="2025-07-11T00:25:45.717966783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 583.18752ms" Jul 11 00:25:45.718052 containerd[1542]: time="2025-07-11T00:25:45.718007783Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:25:45.718494 containerd[1542]: time="2025-07-11T00:25:45.718473543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:25:46.264476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3334626145.mount: Deactivated successfully. Jul 11 00:25:48.007528 containerd[1542]: time="2025-07-11T00:25:48.007467863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:48.008614 containerd[1542]: time="2025-07-11T00:25:48.008563943Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 11 00:25:48.009333 containerd[1542]: time="2025-07-11T00:25:48.009300663Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:48.013033 containerd[1542]: time="2025-07-11T00:25:48.012994343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:25:48.014756 containerd[1542]: time="2025-07-11T00:25:48.014720823Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.29621812s" Jul 11 00:25:48.014790 containerd[1542]: time="2025-07-11T00:25:48.014760383Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 11 00:25:52.696969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 00:25:52.707032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:25:52.727914 systemd[1]: Reloading requested from client PID 2147 ('systemctl') (unit session-7.scope)... Jul 11 00:25:52.728062 systemd[1]: Reloading... Jul 11 00:25:52.788002 zram_generator::config[2183]: No configuration found. Jul 11 00:25:52.888338 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:25:52.941470 systemd[1]: Reloading finished in 213 ms. Jul 11 00:25:52.972988 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:25:52.973054 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:25:52.973320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:52.975698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:25:53.083494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:53.088441 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:25:53.152150 kubelet[2243]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:25:53.152150 kubelet[2243]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:25:53.152150 kubelet[2243]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:25:53.152526 kubelet[2243]: I0711 00:25:53.152196 2243 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:25:53.595508 kubelet[2243]: I0711 00:25:53.595321 2243 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:25:53.595508 kubelet[2243]: I0711 00:25:53.595349 2243 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:25:53.596143 kubelet[2243]: I0711 00:25:53.595962 2243 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:25:53.638159 kubelet[2243]: E0711 00:25:53.638116 2243 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:53.644134 kubelet[2243]: I0711 00:25:53.644068 2243 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:25:53.653737 kubelet[2243]: E0711 00:25:53.653704 2243 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:25:53.653737 kubelet[2243]: I0711 00:25:53.653732 2243 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:25:53.657338 kubelet[2243]: I0711 00:25:53.657311 2243 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:25:53.657872 kubelet[2243]: I0711 00:25:53.657845 2243 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:25:53.657996 kubelet[2243]: I0711 00:25:53.657962 2243 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:25:53.658151 kubelet[2243]: I0711 00:25:53.657990 2243 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:25:53.658233 kubelet[2243]: I0711 00:25:53.658209 2243 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:25:53.658233 kubelet[2243]: I0711 00:25:53.658218 2243 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:25:53.658412 kubelet[2243]: I0711 00:25:53.658388 2243 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:25:53.662105 kubelet[2243]: I0711 00:25:53.661981 2243 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:25:53.662105 kubelet[2243]: I0711 00:25:53.662012 2243 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:25:53.662105 kubelet[2243]: I0711 00:25:53.662031 2243 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:25:53.662105 kubelet[2243]: I0711 00:25:53.662107 2243 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:25:53.664184 kubelet[2243]: W0711 00:25:53.664095 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:53.664184 kubelet[2243]: E0711 00:25:53.664153 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:53.664184 kubelet[2243]: W0711 00:25:53.664152 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:53.664299 kubelet[2243]: E0711 00:25:53.664197 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:53.671658 kubelet[2243]: I0711 00:25:53.671207 2243 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:25:53.671999 kubelet[2243]: I0711 00:25:53.671977 2243 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:25:53.672164 kubelet[2243]: W0711 00:25:53.672147 2243 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:25:53.673230 kubelet[2243]: I0711 00:25:53.673202 2243 server.go:1274] "Started kubelet" Jul 11 00:25:53.677334 kubelet[2243]: I0711 00:25:53.677292 2243 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:25:53.677402 kubelet[2243]: I0711 00:25:53.674286 2243 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:25:53.678411 kubelet[2243]: I0711 00:25:53.678387 2243 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:25:53.678772 kubelet[2243]: I0711 00:25:53.678740 2243 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:25:53.679336 kubelet[2243]: I0711 00:25:53.679304 2243 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:25:53.679743 kubelet[2243]: I0711 00:25:53.679608 2243 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:25:53.679743 kubelet[2243]: I0711 00:25:53.679656 2243 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:25:53.679743 kubelet[2243]: I0711 00:25:53.679732 2243 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:25:53.679863 kubelet[2243]: I0711 00:25:53.679789 2243 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:25:53.680364 kubelet[2243]: E0711 00:25:53.678913 2243 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.118:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.118:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510ac0e86b1cc7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:25:53.673166023 +0000 UTC 
m=+0.581620681,LastTimestamp:2025-07-11 00:25:53.673166023 +0000 UTC m=+0.581620681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:25:53.680364 kubelet[2243]: W0711 00:25:53.680112 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:53.680364 kubelet[2243]: E0711 00:25:53.680157 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:53.680364 kubelet[2243]: E0711 00:25:53.680273 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:53.681339 kubelet[2243]: I0711 00:25:53.680953 2243 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:25:53.681339 kubelet[2243]: I0711 00:25:53.681033 2243 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:25:53.681339 kubelet[2243]: E0711 00:25:53.681161 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="200ms" Jul 11 00:25:53.681339 kubelet[2243]: E0711 00:25:53.681222 2243 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:25:53.682071 kubelet[2243]: I0711 00:25:53.682052 2243 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:25:53.700531 kubelet[2243]: I0711 00:25:53.700390 2243 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:25:53.700531 kubelet[2243]: I0711 00:25:53.700492 2243 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:25:53.700531 kubelet[2243]: I0711 00:25:53.700507 2243 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:25:53.700531 kubelet[2243]: I0711 00:25:53.700524 2243 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:25:53.704014 kubelet[2243]: I0711 00:25:53.703973 2243 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:25:53.704014 kubelet[2243]: I0711 00:25:53.704015 2243 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:25:53.704110 kubelet[2243]: I0711 00:25:53.704046 2243 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:25:53.704274 kubelet[2243]: E0711 00:25:53.704204 2243 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:25:53.705016 kubelet[2243]: W0711 00:25:53.704977 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:53.705069 kubelet[2243]: E0711 00:25:53.705029 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:53.705177 kubelet[2243]: I0711 00:25:53.705161 2243 policy_none.go:49] "None policy: Start" Jul 11 00:25:53.705872 kubelet[2243]: I0711 00:25:53.705775 2243 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:25:53.705872 kubelet[2243]: I0711 00:25:53.705801 2243 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:25:53.710918 kubelet[2243]: I0711 00:25:53.710885 2243 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:25:53.711075 kubelet[2243]: I0711 00:25:53.711052 2243 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:25:53.711112 kubelet[2243]: I0711 00:25:53.711069 2243 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:25:53.711708 kubelet[2243]: I0711 00:25:53.711682 2243 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:25:53.713433 kubelet[2243]: E0711 00:25:53.712786 2243 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:25:53.815307 kubelet[2243]: I0711 00:25:53.813967 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:25:53.815307 kubelet[2243]: E0711 00:25:53.814488 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Jul 11 00:25:53.883315 kubelet[2243]: E0711 00:25:53.882241 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="400ms" Jul 11 00:25:53.980739 kubelet[2243]: I0711 00:25:53.980619 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/072ea9e701e8506f22ac8d7264d6990d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"072ea9e701e8506f22ac8d7264d6990d\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:53.980739 kubelet[2243]: I0711 00:25:53.980670 2243 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/072ea9e701e8506f22ac8d7264d6990d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"072ea9e701e8506f22ac8d7264d6990d\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:53.980739 kubelet[2243]: I0711 00:25:53.980694 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:53.980739 kubelet[2243]: I0711 00:25:53.980711 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:53.981015 kubelet[2243]: I0711 00:25:53.980765 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:25:53.981015 kubelet[2243]: I0711 00:25:53.980806 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/072ea9e701e8506f22ac8d7264d6990d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"072ea9e701e8506f22ac8d7264d6990d\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:53.981015 kubelet[2243]: I0711 00:25:53.980866 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:53.981015 kubelet[2243]: I0711 00:25:53.980901 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:53.981015 kubelet[2243]: I0711 00:25:53.980921 2243 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:54.016065 kubelet[2243]: I0711 00:25:54.016011 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:25:54.016404 kubelet[2243]: E0711 00:25:54.016372 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Jul 11 00:25:54.112273 
kubelet[2243]: E0711 00:25:54.112187 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:54.112865 containerd[1542]: time="2025-07-11T00:25:54.112802263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:25:54.113941 kubelet[2243]: E0711 00:25:54.113919 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:54.114254 containerd[1542]: time="2025-07-11T00:25:54.114198463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:072ea9e701e8506f22ac8d7264d6990d,Namespace:kube-system,Attempt:0,}" Jul 11 00:25:54.115495 kubelet[2243]: E0711 00:25:54.115416 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:54.115861 containerd[1542]: time="2025-07-11T00:25:54.115720983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:25:54.283591 kubelet[2243]: E0711 00:25:54.283546 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="800ms" Jul 11 00:25:54.418171 kubelet[2243]: I0711 00:25:54.418120 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:25:54.418467 kubelet[2243]: E0711 00:25:54.418429 2243 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.118:6443/api/v1/nodes\": dial tcp 10.0.0.118:6443: connect: connection refused" node="localhost" Jul 11 00:25:54.625714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733924537.mount: Deactivated successfully. 
Jul 11 00:25:54.630283 containerd[1542]: time="2025-07-11T00:25:54.630215223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:25:54.631420 containerd[1542]: time="2025-07-11T00:25:54.631358543Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:25:54.632186 containerd[1542]: time="2025-07-11T00:25:54.632135263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:25:54.633178 containerd[1542]: time="2025-07-11T00:25:54.633096383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 11 00:25:54.633758 containerd[1542]: time="2025-07-11T00:25:54.633711103Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:25:54.635319 containerd[1542]: time="2025-07-11T00:25:54.635255943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:25:54.635588 containerd[1542]: time="2025-07-11T00:25:54.635557903Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:25:54.640368 containerd[1542]: time="2025-07-11T00:25:54.640295823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:25:54.641308 containerd[1542]: time="2025-07-11T00:25:54.641280703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 526.99956ms" Jul 11 00:25:54.642119 containerd[1542]: time="2025-07-11T00:25:54.642091583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.86768ms" Jul 11 00:25:54.644745 containerd[1542]: time="2025-07-11T00:25:54.644718063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.94288ms" Jul 11 00:25:54.656960 kubelet[2243]: W0711 00:25:54.656902 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:54.657223 
kubelet[2243]: E0711 00:25:54.656969 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:54.759867 kubelet[2243]: W0711 00:25:54.754757 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:54.759867 kubelet[2243]: E0711 00:25:54.754857 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:54.775639 containerd[1542]: time="2025-07-11T00:25:54.775541743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:25:54.775639 containerd[1542]: time="2025-07-11T00:25:54.775602983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:25:54.775639 containerd[1542]: time="2025-07-11T00:25:54.775615023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:25:54.775894 containerd[1542]: time="2025-07-11T00:25:54.775696103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:25:54.775894 containerd[1542]: time="2025-07-11T00:25:54.775767383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:25:54.775894 containerd[1542]: time="2025-07-11T00:25:54.775798743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:25:54.775969 containerd[1542]: time="2025-07-11T00:25:54.775809503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:25:54.775969 containerd[1542]: time="2025-07-11T00:25:54.775902743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:25:54.777321 containerd[1542]: time="2025-07-11T00:25:54.777159183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:25:54.777321 containerd[1542]: time="2025-07-11T00:25:54.777196663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:25:54.777321 containerd[1542]: time="2025-07-11T00:25:54.777207383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:25:54.777321 containerd[1542]: time="2025-07-11T00:25:54.777268983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:25:54.829442 containerd[1542]: time="2025-07-11T00:25:54.829396703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:072ea9e701e8506f22ac8d7264d6990d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b62861613294c020ad50df42541875a8c4bead8f3f900c0552dce0c27eb3b6a2\"" Jul 11 00:25:54.830059 containerd[1542]: time="2025-07-11T00:25:54.830030223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f41a26df6b9f35d3693ae37de07fc7d6215285c782a559e24538c82a081c787\"" Jul 11 00:25:54.830364 containerd[1542]: time="2025-07-11T00:25:54.830272423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"12ad084e7c2dbf2fda4855ee3530ff70efd8d1419f0e6b5aa6030fe9160e49fd\"" Jul 11 00:25:54.831021 kubelet[2243]: E0711 00:25:54.830999 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:54.831109 kubelet[2243]: E0711 00:25:54.831054 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:54.831214 kubelet[2243]: E0711 00:25:54.831194 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:54.833573 containerd[1542]: time="2025-07-11T00:25:54.833519583Z" level=info msg="CreateContainer within sandbox \"5f41a26df6b9f35d3693ae37de07fc7d6215285c782a559e24538c82a081c787\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:25:54.833636 containerd[1542]: time="2025-07-11T00:25:54.833580463Z" level=info msg="CreateContainer within sandbox \"b62861613294c020ad50df42541875a8c4bead8f3f900c0552dce0c27eb3b6a2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:25:54.833636 containerd[1542]: time="2025-07-11T00:25:54.833531823Z" level=info msg="CreateContainer within sandbox \"12ad084e7c2dbf2fda4855ee3530ff70efd8d1419f0e6b5aa6030fe9160e49fd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:25:54.854986 containerd[1542]: time="2025-07-11T00:25:54.854921223Z" level=info msg="CreateContainer within sandbox \"5f41a26df6b9f35d3693ae37de07fc7d6215285c782a559e24538c82a081c787\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e22bdab4c6944a570389ac6f5a42b14dd16bd54d18b4c97a80228d500f6d19be\"" Jul 11 00:25:54.855583 containerd[1542]: time="2025-07-11T00:25:54.855542983Z" level=info msg="StartContainer for \"e22bdab4c6944a570389ac6f5a42b14dd16bd54d18b4c97a80228d500f6d19be\"" Jul 11 00:25:54.856863 containerd[1542]: time="2025-07-11T00:25:54.856753063Z" level=info msg="CreateContainer within sandbox \"12ad084e7c2dbf2fda4855ee3530ff70efd8d1419f0e6b5aa6030fe9160e49fd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"737cc37979bf01a69bc015860339d41d657acb8066eee46aeb7788eb31cf1207\"" Jul 11 00:25:54.857198 containerd[1542]: time="2025-07-11T00:25:54.857164303Z" level=info msg="StartContainer for 
\"737cc37979bf01a69bc015860339d41d657acb8066eee46aeb7788eb31cf1207\"" Jul 11 00:25:54.857587 containerd[1542]: time="2025-07-11T00:25:54.857504943Z" level=info msg="CreateContainer within sandbox \"b62861613294c020ad50df42541875a8c4bead8f3f900c0552dce0c27eb3b6a2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e55aee53d67c1d4519979461642dd1fe7b0bb48c3e50d33bbebe3d696a190a3\"" Jul 11 00:25:54.857882 containerd[1542]: time="2025-07-11T00:25:54.857821543Z" level=info msg="StartContainer for \"3e55aee53d67c1d4519979461642dd1fe7b0bb48c3e50d33bbebe3d696a190a3\"" Jul 11 00:25:54.916069 containerd[1542]: time="2025-07-11T00:25:54.914686863Z" level=info msg="StartContainer for \"737cc37979bf01a69bc015860339d41d657acb8066eee46aeb7788eb31cf1207\" returns successfully" Jul 11 00:25:54.916069 containerd[1542]: time="2025-07-11T00:25:54.914799863Z" level=info msg="StartContainer for \"e22bdab4c6944a570389ac6f5a42b14dd16bd54d18b4c97a80228d500f6d19be\" returns successfully" Jul 11 00:25:54.927341 containerd[1542]: time="2025-07-11T00:25:54.927298343Z" level=info msg="StartContainer for \"3e55aee53d67c1d4519979461642dd1fe7b0bb48c3e50d33bbebe3d696a190a3\" returns successfully" Jul 11 00:25:55.052777 kubelet[2243]: W0711 00:25:55.052717 2243 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.118:6443: connect: connection refused Jul 11 00:25:55.053410 kubelet[2243]: E0711 00:25:55.052787 2243 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.118:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:25:55.084871 kubelet[2243]: E0711 00:25:55.084799 2243 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.118:6443: connect: connection refused" interval="1.6s" Jul 11 00:25:55.220265 kubelet[2243]: I0711 00:25:55.220030 2243 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:25:55.717767 kubelet[2243]: E0711 00:25:55.717663 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:55.723634 kubelet[2243]: E0711 00:25:55.723445 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:55.724209 kubelet[2243]: E0711 00:25:55.724147 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:56.316562 kubelet[2243]: I0711 00:25:56.316439 2243 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:25:56.316562 kubelet[2243]: E0711 00:25:56.316474 2243 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:25:56.334479 kubelet[2243]: E0711 00:25:56.334425 2243 kubelet_node_status.go:453] "Error getting the current node from lister" 
err="node \"localhost\" not found" Jul 11 00:25:56.434903 kubelet[2243]: E0711 00:25:56.434858 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:56.535409 kubelet[2243]: E0711 00:25:56.535364 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:56.636374 kubelet[2243]: E0711 00:25:56.635853 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:56.725565 kubelet[2243]: E0711 00:25:56.725377 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:56.725565 kubelet[2243]: E0711 00:25:56.725416 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:56.736773 kubelet[2243]: E0711 00:25:56.736709 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:56.837257 kubelet[2243]: E0711 00:25:56.837207 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:56.937667 kubelet[2243]: E0711 00:25:56.937632 2243 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:57.666145 kubelet[2243]: I0711 00:25:57.666066 2243 apiserver.go:52] "Watching apiserver" Jul 11 00:25:57.679918 kubelet[2243]: I0711 00:25:57.679881 2243 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:25:57.732405 kubelet[2243]: E0711 00:25:57.732374 2243 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:58.128839 systemd[1]: Reloading requested from client PID 2518 ('systemctl') (unit session-7.scope)... Jul 11 00:25:58.128854 systemd[1]: Reloading... Jul 11 00:25:58.185864 zram_generator::config[2560]: No configuration found. Jul 11 00:25:58.277413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:25:58.336556 systemd[1]: Reloading finished in 207 ms. Jul 11 00:25:58.365544 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:25:58.383855 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:25:58.384185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:58.398072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:25:58.502606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:25:58.507642 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:25:58.548996 kubelet[2609]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:25:58.548996 kubelet[2609]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:25:58.548996 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:25:58.549530 kubelet[2609]: I0711 00:25:58.549049 2609 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:25:58.555124 kubelet[2609]: I0711 00:25:58.555086 2609 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:25:58.555124 kubelet[2609]: I0711 00:25:58.555116 2609 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:25:58.555337 kubelet[2609]: I0711 00:25:58.555314 2609 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:25:58.556620 kubelet[2609]: I0711 00:25:58.556598 2609 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 00:25:58.559955 kubelet[2609]: I0711 00:25:58.559682 2609 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:25:58.562288 kubelet[2609]: E0711 00:25:58.562256 2609 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:25:58.562288 kubelet[2609]: I0711 00:25:58.562287 2609 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:25:58.565194 kubelet[2609]: I0711 00:25:58.565173 2609 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:25:58.565533 kubelet[2609]: I0711 00:25:58.565522 2609 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:25:58.565640 kubelet[2609]: I0711 00:25:58.565615 2609 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:25:58.565818 kubelet[2609]: I0711 00:25:58.565643 2609 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 11 00:25:58.565911 kubelet[2609]: I0711 00:25:58.565838 2609 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:25:58.565911 kubelet[2609]: I0711 00:25:58.565848 2609 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:25:58.565911 kubelet[2609]: I0711 00:25:58.565884 2609 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:25:58.565996 kubelet[2609]: I0711 00:25:58.565986 2609 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:25:58.566027 kubelet[2609]: I0711 00:25:58.566019 2609 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:25:58.566053 kubelet[2609]: I0711 00:25:58.566040 2609 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:25:58.566080 kubelet[2609]: I0711 00:25:58.566056 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:25:58.566674 kubelet[2609]: I0711 00:25:58.566650 2609 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:25:58.567762 kubelet[2609]: I0711 00:25:58.567729 2609 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:25:58.568174 kubelet[2609]: I0711 00:25:58.568155 2609 server.go:1274] "Started kubelet" Jul 11 00:25:58.569549 kubelet[2609]: I0711 00:25:58.568703 2609 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:25:58.569549 kubelet[2609]: I0711 
00:25:58.569403 2609 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:25:58.569549 kubelet[2609]: I0711 00:25:58.569514 2609 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:25:58.569960 kubelet[2609]: I0711 00:25:58.569898 2609 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:25:58.570171 kubelet[2609]: I0711 00:25:58.570143 2609 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:25:58.570412 kubelet[2609]: I0711 00:25:58.570371 2609 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:25:58.570494 kubelet[2609]: I0711 00:25:58.570477 2609 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:25:58.570598 kubelet[2609]: I0711 00:25:58.570584 2609 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:25:58.576836 kubelet[2609]: E0711 00:25:58.571427 2609 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:25:58.576836 kubelet[2609]: I0711 00:25:58.576074 2609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:25:58.576836 kubelet[2609]: I0711 00:25:58.576652 2609 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:25:58.576836 kubelet[2609]: I0711 00:25:58.576741 2609 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:25:58.588918 kubelet[2609]: I0711 00:25:58.588889 2609 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:25:58.598341 kubelet[2609]: I0711 00:25:58.598288 2609 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:25:58.598934 kubelet[2609]: E0711 00:25:58.598911 2609 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:25:58.599678 kubelet[2609]: I0711 00:25:58.599641 2609 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:25:58.599678 kubelet[2609]: I0711 00:25:58.599677 2609 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:25:58.599778 kubelet[2609]: I0711 00:25:58.599696 2609 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:25:58.599778 kubelet[2609]: E0711 00:25:58.599742 2609 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:25:58.629743 kubelet[2609]: I0711 00:25:58.629717 2609 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:25:58.629743 kubelet[2609]: I0711 00:25:58.629735 2609 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:25:58.629923 kubelet[2609]: I0711 00:25:58.629757 2609 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:25:58.629946 kubelet[2609]: I0711 00:25:58.629934 2609 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:25:58.629969 kubelet[2609]: I0711 00:25:58.629946 2609 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:25:58.629969 kubelet[2609]: I0711 00:25:58.629964 2609 policy_none.go:49] "None policy: Start" Jul 11 00:25:58.630563 kubelet[2609]: I0711 00:25:58.630546 2609 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:25:58.630616 kubelet[2609]: I0711 00:25:58.630571 2609 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:25:58.630727 kubelet[2609]: I0711 00:25:58.630713 2609 state_mem.go:75] "Updated machine memory state" Jul 11 00:25:58.631911 kubelet[2609]: I0711 00:25:58.631887 2609 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:25:58.632536 kubelet[2609]: I0711 00:25:58.632047 2609 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:25:58.632536 kubelet[2609]: I0711 00:25:58.632064 2609 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:25:58.632536 kubelet[2609]: I0711 00:25:58.632248 2609 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:25:58.708917 kubelet[2609]: E0711 00:25:58.708662 2609 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 11 00:25:58.736328 kubelet[2609]: I0711 00:25:58.736296 2609 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:25:58.742712 kubelet[2609]: I0711 00:25:58.742687 2609 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:25:58.742855 kubelet[2609]: I0711 00:25:58.742763 2609 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:25:58.772344 kubelet[2609]: I0711 00:25:58.772304 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:58.772344 kubelet[2609]: I0711 00:25:58.772344 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:25:58.772497 kubelet[2609]: I0711 00:25:58.772364 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/072ea9e701e8506f22ac8d7264d6990d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"072ea9e701e8506f22ac8d7264d6990d\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:58.772497 kubelet[2609]: I0711 00:25:58.772383 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/072ea9e701e8506f22ac8d7264d6990d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"072ea9e701e8506f22ac8d7264d6990d\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:58.772497 kubelet[2609]: I0711 00:25:58.772406 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:58.772497 kubelet[2609]: I0711 00:25:58.772421 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:58.772497 kubelet[2609]: I0711 00:25:58.772437 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:58.772598 kubelet[2609]: I0711 00:25:58.772451 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/072ea9e701e8506f22ac8d7264d6990d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"072ea9e701e8506f22ac8d7264d6990d\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:58.772598 kubelet[2609]: I0711 00:25:58.772466 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:25:59.010133 kubelet[2609]: E0711 00:25:59.009955 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:59.010133 kubelet[2609]: E0711 00:25:59.010009 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:59.010133 kubelet[2609]: E0711 00:25:59.009969 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:59.567129 kubelet[2609]: I0711 00:25:59.567089 2609 apiserver.go:52] "Watching apiserver" Jul 11 00:25:59.578923 kubelet[2609]: I0711 00:25:59.570867 2609 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:25:59.610885 kubelet[2609]: E0711 00:25:59.610342 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:59.610885 kubelet[2609]: E0711 00:25:59.610745 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:59.616723 kubelet[2609]: E0711 00:25:59.616528 2609 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:25:59.616723 kubelet[2609]: E0711 00:25:59.616667 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:25:59.637328 kubelet[2609]: I0711 00:25:59.637242 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.637226103 podStartE2EDuration="2.637226103s" podCreationTimestamp="2025-07-11 00:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:25:59.629065703 +0000 UTC m=+1.118623321" watchObservedRunningTime="2025-07-11 00:25:59.637226103 +0000 UTC m=+1.126783721" Jul 11 00:25:59.659225 kubelet[2609]: I0711 00:25:59.659163 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.659145503 podStartE2EDuration="1.659145503s" podCreationTimestamp="2025-07-11 00:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:25:59.637638743 +0000 UTC m=+1.127196361" watchObservedRunningTime="2025-07-11 00:25:59.659145503 +0000 UTC m=+1.148703121" Jul 11 00:25:59.673344 kubelet[2609]: I0711 00:25:59.673262 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.673243783 podStartE2EDuration="1.673243783s" podCreationTimestamp="2025-07-11 00:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:25:59.659410863 +0000 UTC m=+1.148968481" watchObservedRunningTime="2025-07-11 00:25:59.673243783 +0000 UTC m=+1.162801361" Jul 11 00:26:00.611674 kubelet[2609]: E0711 00:26:00.611637 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:04.391071 kubelet[2609]: E0711 00:26:04.390969 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:04.616771 kubelet[2609]: E0711 00:26:04.616736 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:05.822433 kubelet[2609]: I0711 00:26:05.822402 2609 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:26:05.823339 containerd[1542]: time="2025-07-11T00:26:05.823245907Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:26:05.823620 kubelet[2609]: I0711 00:26:05.823418 2609 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:26:06.929544 kubelet[2609]: I0711 00:26:06.929496 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47167b0e-2c67-4ef0-846d-3b341a68baba-lib-modules\") pod \"kube-proxy-855hv\" (UID: \"47167b0e-2c67-4ef0-846d-3b341a68baba\") " pod="kube-system/kube-proxy-855hv" Jul 11 00:26:06.929544 kubelet[2609]: I0711 00:26:06.929544 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lmd\" (UniqueName: \"kubernetes.io/projected/47167b0e-2c67-4ef0-846d-3b341a68baba-kube-api-access-62lmd\") pod \"kube-proxy-855hv\" (UID: \"47167b0e-2c67-4ef0-846d-3b341a68baba\") " pod="kube-system/kube-proxy-855hv" Jul 11 00:26:06.930049 kubelet[2609]: I0711 00:26:06.929573 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47167b0e-2c67-4ef0-846d-3b341a68baba-kube-proxy\") pod \"kube-proxy-855hv\" (UID: \"47167b0e-2c67-4ef0-846d-3b341a68baba\") " pod="kube-system/kube-proxy-855hv" Jul 11 00:26:06.930049 kubelet[2609]: I0711 00:26:06.929588 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47167b0e-2c67-4ef0-846d-3b341a68baba-xtables-lock\") pod \"kube-proxy-855hv\" (UID: \"47167b0e-2c67-4ef0-846d-3b341a68baba\") " pod="kube-system/kube-proxy-855hv" Jul 11 00:26:07.030817 kubelet[2609]: I0711 00:26:07.030402 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscdl\" (UniqueName: \"kubernetes.io/projected/b5e09197-92f7-440c-837e-cb6942829d9a-kube-api-access-kscdl\") pod \"tigera-operator-5bf8dfcb4-xhzlt\" (UID: \"b5e09197-92f7-440c-837e-cb6942829d9a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xhzlt" Jul 11 00:26:07.030817 kubelet[2609]: I0711 00:26:07.030452 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b5e09197-92f7-440c-837e-cb6942829d9a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-xhzlt\" (UID: \"b5e09197-92f7-440c-837e-cb6942829d9a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xhzlt" Jul 11 00:26:07.177639 kubelet[2609]: E0711 00:26:07.177592 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:07.178504 containerd[1542]: time="2025-07-11T00:26:07.178153435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-855hv,Uid:47167b0e-2c67-4ef0-846d-3b341a68baba,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:07.196400 containerd[1542]: time="2025-07-11T00:26:07.196298584Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:07.196400 containerd[1542]: time="2025-07-11T00:26:07.196358543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:07.196400 containerd[1542]: time="2025-07-11T00:26:07.196369903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:07.196639 containerd[1542]: time="2025-07-11T00:26:07.196453222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:07.226190 containerd[1542]: time="2025-07-11T00:26:07.226149892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-855hv,Uid:47167b0e-2c67-4ef0-846d-3b341a68baba,Namespace:kube-system,Attempt:0,} returns sandbox id \"d73956e16ca77ce5f08c5e29a7f00deeb526992ca043d5dc6f9911ee5751515f\"" Jul 11 00:26:07.227315 kubelet[2609]: E0711 00:26:07.227088 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:07.230347 containerd[1542]: time="2025-07-11T00:26:07.230292634Z" level=info msg="CreateContainer within sandbox \"d73956e16ca77ce5f08c5e29a7f00deeb526992ca043d5dc6f9911ee5751515f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:26:07.241115 containerd[1542]: time="2025-07-11T00:26:07.241059005Z" level=info msg="CreateContainer within sandbox \"d73956e16ca77ce5f08c5e29a7f00deeb526992ca043d5dc6f9911ee5751515f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1b5e2ac0b20e202c47c68aea54d6628970e436d898779d71555ce484e332f8ab\"" Jul 11 00:26:07.242346 containerd[1542]: time="2025-07-11T00:26:07.242243069Z" level=info msg="StartContainer for \"1b5e2ac0b20e202c47c68aea54d6628970e436d898779d71555ce484e332f8ab\"" Jul 11 00:26:07.289297 containerd[1542]: time="2025-07-11T00:26:07.289249779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xhzlt,Uid:b5e09197-92f7-440c-837e-cb6942829d9a,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:26:07.296873 containerd[1542]: time="2025-07-11T00:26:07.295039859Z" level=info msg="StartContainer for \"1b5e2ac0b20e202c47c68aea54d6628970e436d898779d71555ce484e332f8ab\" returns successfully" Jul 11 00:26:07.310718 containerd[1542]: time="2025-07-11T00:26:07.310493485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:07.310718 containerd[1542]: time="2025-07-11T00:26:07.310552725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:07.310718 containerd[1542]: time="2025-07-11T00:26:07.310568324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:07.310718 containerd[1542]: time="2025-07-11T00:26:07.310668963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:07.352466 containerd[1542]: time="2025-07-11T00:26:07.352371387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xhzlt,Uid:b5e09197-92f7-440c-837e-cb6942829d9a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e791ec4b6923897751b2c59e12a02b12d09c58eca33371bd7789877fe6877f12\"" Jul 11 00:26:07.355495 containerd[1542]: time="2025-07-11T00:26:07.355450344Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:26:07.570695 kubelet[2609]: E0711 00:26:07.570581 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:07.622932 kubelet[2609]: E0711 00:26:07.622900 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:07.622932 kubelet[2609]: E0711 00:26:07.622934 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:08.452294 kubelet[2609]: E0711 00:26:08.452242 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:08.465672 kubelet[2609]: I0711 00:26:08.465581 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-855hv" podStartSLOduration=2.465507918 podStartE2EDuration="2.465507918s" podCreationTimestamp="2025-07-11 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:07.639727934 +0000 UTC m=+9.129285552" watchObservedRunningTime="2025-07-11 00:26:08.465507918 +0000 UTC m=+9.955065536" Jul 11 00:26:08.625500 kubelet[2609]: E0711 00:26:08.625470 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:08.849811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029958122.mount: Deactivated successfully. 
Jul 11 00:26:09.304530 containerd[1542]: time="2025-07-11T00:26:09.304466530Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:09.305135 containerd[1542]: time="2025-07-11T00:26:09.305100043Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 11 00:26:09.305770 containerd[1542]: time="2025-07-11T00:26:09.305714035Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:09.308099 containerd[1542]: time="2025-07-11T00:26:09.308059367Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:09.309520 containerd[1542]: time="2025-07-11T00:26:09.309446310Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.953953286s" Jul 11 00:26:09.309520 containerd[1542]: time="2025-07-11T00:26:09.309480229Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 11 00:26:09.311784 containerd[1542]: time="2025-07-11T00:26:09.311750922Z" level=info msg="CreateContainer within sandbox \"e791ec4b6923897751b2c59e12a02b12d09c58eca33371bd7789877fe6877f12\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:26:09.321077 containerd[1542]: time="2025-07-11T00:26:09.320914490Z" level=info msg="CreateContainer within sandbox \"e791ec4b6923897751b2c59e12a02b12d09c58eca33371bd7789877fe6877f12\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f8b438f17634850253001f864d9db7c88ec17e461c487dff29227c101a07afd5\"" Jul 11 00:26:09.321514 containerd[1542]: time="2025-07-11T00:26:09.321469444Z" level=info msg="StartContainer for \"f8b438f17634850253001f864d9db7c88ec17e461c487dff29227c101a07afd5\"" Jul 11 00:26:09.374035 containerd[1542]: time="2025-07-11T00:26:09.373983726Z" level=info msg="StartContainer for \"f8b438f17634850253001f864d9db7c88ec17e461c487dff29227c101a07afd5\" returns successfully" Jul 11 00:26:14.712280 sudo[1748]: pam_unix(sudo:session): session closed for user root Jul 11 00:26:14.720278 sshd[1741]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:14.724436 systemd[1]: sshd@6-10.0.0.118:22-10.0.0.1:44122.service: Deactivated successfully. Jul 11 00:26:14.727733 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:26:14.727852 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:26:14.729651 systemd-logind[1524]: Removed session 7. Jul 11 00:26:16.538947 update_engine[1527]: I20250711 00:26:16.538864 1527 update_attempter.cc:509] Updating boot flags... 
Jul 11 00:26:16.575007 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3022) Jul 11 00:26:19.602273 kubelet[2609]: I0711 00:26:19.602199 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-xhzlt" podStartSLOduration=11.6456878 podStartE2EDuration="13.602180893s" podCreationTimestamp="2025-07-11 00:26:06 +0000 UTC" firstStartedPulling="2025-07-11 00:26:07.353631289 +0000 UTC m=+8.843188867" lastFinishedPulling="2025-07-11 00:26:09.310124342 +0000 UTC m=+10.799681960" observedRunningTime="2025-07-11 00:26:09.635742265 +0000 UTC m=+11.125299883" watchObservedRunningTime="2025-07-11 00:26:19.602180893 +0000 UTC m=+21.091738511" Jul 11 00:26:19.619148 kubelet[2609]: I0711 00:26:19.619098 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba6d0669-8985-48a5-8263-382be4f38c3d-tigera-ca-bundle\") pod \"calico-typha-5bc8b7d464-cpjfv\" (UID: \"ba6d0669-8985-48a5-8263-382be4f38c3d\") " pod="calico-system/calico-typha-5bc8b7d464-cpjfv" Jul 11 00:26:19.619148 kubelet[2609]: I0711 00:26:19.619142 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ba6d0669-8985-48a5-8263-382be4f38c3d-typha-certs\") pod \"calico-typha-5bc8b7d464-cpjfv\" (UID: \"ba6d0669-8985-48a5-8263-382be4f38c3d\") " pod="calico-system/calico-typha-5bc8b7d464-cpjfv" Jul 11 00:26:19.619333 kubelet[2609]: I0711 00:26:19.619162 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h58tm\" (UniqueName: \"kubernetes.io/projected/ba6d0669-8985-48a5-8263-382be4f38c3d-kube-api-access-h58tm\") pod \"calico-typha-5bc8b7d464-cpjfv\" (UID: \"ba6d0669-8985-48a5-8263-382be4f38c3d\") " pod="calico-system/calico-typha-5bc8b7d464-cpjfv" Jul 11 00:26:19.906097 kubelet[2609]: E0711 00:26:19.905995 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:19.907077 containerd[1542]: time="2025-07-11T00:26:19.906736353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc8b7d464-cpjfv,Uid:ba6d0669-8985-48a5-8263-382be4f38c3d,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:19.921890 kubelet[2609]: I0711 00:26:19.921442 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-cni-log-dir\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923410 kubelet[2609]: I0711 00:26:19.923223 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-var-run-calico\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923410 kubelet[2609]: I0711 00:26:19.923266 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-cni-net-dir\") pod \"calico-node-mk4zm\" (UID: 
\"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923410 kubelet[2609]: I0711 00:26:19.923283 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/304fcedd-ad93-457c-942d-dc79f7a44483-node-certs\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923410 kubelet[2609]: I0711 00:26:19.923298 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-policysync\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923410 kubelet[2609]: I0711 00:26:19.923313 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/304fcedd-ad93-457c-942d-dc79f7a44483-tigera-ca-bundle\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923583 kubelet[2609]: I0711 00:26:19.923331 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-cni-bin-dir\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923583 kubelet[2609]: I0711 00:26:19.923349 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-flexvol-driver-host\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923583 kubelet[2609]: I0711 00:26:19.923370 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-var-lib-calico\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923583 kubelet[2609]: I0711 00:26:19.923424 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-lib-modules\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923583 kubelet[2609]: I0711 00:26:19.923465 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/304fcedd-ad93-457c-942d-dc79f7a44483-xtables-lock\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.923690 kubelet[2609]: I0711 00:26:19.923486 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfxpw\" (UniqueName: \"kubernetes.io/projected/304fcedd-ad93-457c-942d-dc79f7a44483-kube-api-access-jfxpw\") pod \"calico-node-mk4zm\" (UID: \"304fcedd-ad93-457c-942d-dc79f7a44483\") " 
pod="calico-system/calico-node-mk4zm" Jul 11 00:26:19.960732 containerd[1542]: time="2025-07-11T00:26:19.958268864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:19.960900 containerd[1542]: time="2025-07-11T00:26:19.959004419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:19.960900 containerd[1542]: time="2025-07-11T00:26:19.959033619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:19.960900 containerd[1542]: time="2025-07-11T00:26:19.959139339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:20.016386 containerd[1542]: time="2025-07-11T00:26:20.016352420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bc8b7d464-cpjfv,Uid:ba6d0669-8985-48a5-8263-382be4f38c3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8092105f0c1489da27086d46e31a1d8ef2d04a6b4d2944e649bf84b27f3a5406\"" Jul 11 00:26:20.017171 kubelet[2609]: E0711 00:26:20.017149 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:20.018956 containerd[1542]: time="2025-07-11T00:26:20.018141250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:26:20.029251 kubelet[2609]: E0711 00:26:20.029230 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:20.029402 kubelet[2609]: W0711 00:26:20.029386 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:20.029980 kubelet[2609]: E0711 00:26:20.029866 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:20.030145 kubelet[2609]: E0711 00:26:20.030132 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:20.030237 kubelet[2609]: W0711 00:26:20.030196 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:20.030237 kubelet[2609]: E0711 00:26:20.030214 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jul 11 00:26:20.179434 containerd[1542]: time="2025-07-11T00:26:20.179308007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mk4zm,Uid:304fcedd-ad93-457c-942d-dc79f7a44483,Namespace:calico-system,Attempt:0,}"
Jul 11 00:26:20.256066 kubelet[2609]: E0711 00:26:20.255796 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v4x" podUID="6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab"
Jul 11 00:26:20.281249 containerd[1542]: time="2025-07-11T00:26:20.281147838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:26:20.281249 containerd[1542]: time="2025-07-11T00:26:20.281214238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:26:20.281249 containerd[1542]: time="2025-07-11T00:26:20.281229718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:26:20.282363 containerd[1542]: time="2025-07-11T00:26:20.282161272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
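[Editor's note: the pod_workers error for csi-node-driver-79v4x will repeat until calico-node's install-cni step writes a network config into the directory the runtime watches (the cni-net-dir mount); containerd reports NetworkReady=false as long as that directory is empty. A quick check, as a sketch; /etc/cni/net.d is containerd's default conf_dir and an assumption for this host:]

// Sketch: list CNI network configs. containerd keeps NetworkReady=false
// until a .conf/.conflist file appears in its conf_dir.
package main

import (
	"fmt"
	"os"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Println("no CNI config dir yet:", err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}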
Jul 11 00:26:20.324554 containerd[1542]: time="2025-07-11T00:26:20.324445539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mk4zm,Uid:304fcedd-ad93-457c-942d-dc79f7a44483,Namespace:calico-system,Attempt:0,} returns sandbox id \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\""
Jul 11 00:26:20.327090 kubelet[2609]: I0711 00:26:20.327003 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab-kubelet-dir\") pod \"csi-node-driver-79v4x\" (UID: \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\") " pod="calico-system/csi-node-driver-79v4x"
Jul 11 00:26:20.327713 kubelet[2609]: I0711 00:26:20.327321 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab-socket-dir\") pod \"csi-node-driver-79v4x\" (UID: \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\") " pod="calico-system/csi-node-driver-79v4x"
Jul 11 00:26:20.327713 kubelet[2609]: I0711 00:26:20.327572 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69lm\" (UniqueName: \"kubernetes.io/projected/6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab-kube-api-access-h69lm\") pod \"csi-node-driver-79v4x\" (UID: \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\") " pod="calico-system/csi-node-driver-79v4x"
Jul 11 00:26:20.327930 kubelet[2609]: I0711 00:26:20.327770 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab-registration-dir\") pod \"csi-node-driver-79v4x\" (UID: \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\") " pod="calico-system/csi-node-driver-79v4x"
Jul 11 00:26:20.328012 kubelet[2609]: I0711 00:26:20.327981 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab-varrun\") pod \"csi-node-driver-79v4x\" (UID: \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\") " pod="calico-system/csi-node-driver-79v4x"
Jul 11 00:26:20.964367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851153520.mount: Deactivated successfully.
Jul 11 00:26:21.600113 kubelet[2609]: E0711 00:26:21.600068 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v4x" podUID="6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab"
Jul 11 00:26:21.986903 containerd[1542]: time="2025-07-11T00:26:21.986208539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:26:21.987393 containerd[1542]: time="2025-07-11T00:26:21.987360773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 11 00:26:21.995454 containerd[1542]: time="2025-07-11T00:26:21.995406608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.977205319s"
Jul 11 00:26:21.995454 containerd[1542]: time="2025-07-11T00:26:21.995445608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 11 00:26:21.999843 containerd[1542]: time="2025-07-11T00:26:21.999736223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 11 00:26:22.015725 containerd[1542]: time="2025-07-11T00:26:22.015687219Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:26:22.017106 containerd[1542]: time="2025-07-11T00:26:22.016497895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:26:22.029108 containerd[1542]: time="2025-07-11T00:26:22.029071149Z" level=info msg="CreateContainer within sandbox \"8092105f0c1489da27086d46e31a1d8ef2d04a6b4d2944e649bf84b27f3a5406\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 11 00:26:22.039270 containerd[1542]: time="2025-07-11T00:26:22.039229096Z" level=info msg="CreateContainer within sandbox \"8092105f0c1489da27086d46e31a1d8ef2d04a6b4d2944e649bf84b27f3a5406\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b8f78639ff41eca7ff9ab815173951ede256652f6854b0254b5a908ef20b9cad\""
Jul 11 00:26:22.039892 containerd[1542]: time="2025-07-11T00:26:22.039857612Z" level=info msg="StartContainer for \"b8f78639ff41eca7ff9ab815173951ede256652f6854b0254b5a908ef20b9cad\""
Jul 11 00:26:22.207682 containerd[1542]: time="2025-07-11T00:26:22.207015295Z" level=info msg="StartContainer for \"b8f78639ff41eca7ff9ab815173951ede256652f6854b0254b5a908ef20b9cad\" returns successfully"
Jul 11 00:26:22.683460 kubelet[2609]: E0711 00:26:22.683144 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
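[Editor's note: the dns.go error above (first seen at 00:26:20.017171) fires because the host's resolv.conf lists more nameservers than kubelet will pass to pods; the applied line shows the three it kept, matching the glibc MAXNS limit of 3, and the rest were dropped. A sketch of the same check:]

// Sketch: count nameserver entries the way kubelet's DNS check does; more
// than three (the glibc MAXNS limit) triggers the "Nameserver limits
// exceeded" event above, and the extra servers are omitted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	fmt.Println("nameservers:", servers)
	if len(servers) > 3 {
		fmt.Println("kubelet will only apply the first three:", servers[:3])
	}
}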
pod="calico-system/calico-typha-5bc8b7d464-cpjfv" podStartSLOduration=1.714158635 podStartE2EDuration="3.69551545s" podCreationTimestamp="2025-07-11 00:26:19 +0000 UTC" firstStartedPulling="2025-07-11 00:26:20.017904331 +0000 UTC m=+21.507461949" lastFinishedPulling="2025-07-11 00:26:21.999261146 +0000 UTC m=+23.488818764" observedRunningTime="2025-07-11 00:26:22.69536337 +0000 UTC m=+24.184920988" watchObservedRunningTime="2025-07-11 00:26:22.69551545 +0000 UTC m=+24.185073028" Jul 11 00:26:22.739438 kubelet[2609]: E0711 00:26:22.739310 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.739438 kubelet[2609]: W0711 00:26:22.739329 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.739438 kubelet[2609]: E0711 00:26:22.739347 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.739689 kubelet[2609]: E0711 00:26:22.739676 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.739747 kubelet[2609]: W0711 00:26:22.739736 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.739813 kubelet[2609]: E0711 00:26:22.739802 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.740149 kubelet[2609]: E0711 00:26:22.740052 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.740149 kubelet[2609]: W0711 00:26:22.740065 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.740149 kubelet[2609]: E0711 00:26:22.740076 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.740307 kubelet[2609]: E0711 00:26:22.740293 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.740360 kubelet[2609]: W0711 00:26:22.740350 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.740432 kubelet[2609]: E0711 00:26:22.740420 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:22.749477 kubelet[2609]: E0711 00:26:22.749437 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.749519 kubelet[2609]: W0711 00:26:22.749479 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.749519 kubelet[2609]: E0711 00:26:22.749500 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.750292 kubelet[2609]: E0711 00:26:22.750275 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.750292 kubelet[2609]: W0711 00:26:22.750292 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.750384 kubelet[2609]: E0711 00:26:22.750311 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.750591 kubelet[2609]: E0711 00:26:22.750561 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.750591 kubelet[2609]: W0711 00:26:22.750573 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.750646 kubelet[2609]: E0711 00:26:22.750624 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.751835 kubelet[2609]: E0711 00:26:22.751799 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.751835 kubelet[2609]: W0711 00:26:22.751817 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.751948 kubelet[2609]: E0711 00:26:22.751861 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.752660 kubelet[2609]: E0711 00:26:22.752632 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.752660 kubelet[2609]: W0711 00:26:22.752650 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.752818 kubelet[2609]: E0711 00:26:22.752765 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:22.752916 kubelet[2609]: E0711 00:26:22.752902 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.752916 kubelet[2609]: W0711 00:26:22.752914 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.752989 kubelet[2609]: E0711 00:26:22.752952 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.753068 kubelet[2609]: E0711 00:26:22.753057 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.753068 kubelet[2609]: W0711 00:26:22.753066 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.753116 kubelet[2609]: E0711 00:26:22.753078 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.753297 kubelet[2609]: E0711 00:26:22.753285 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.753297 kubelet[2609]: W0711 00:26:22.753295 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.753350 kubelet[2609]: E0711 00:26:22.753304 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:26:22.753613 kubelet[2609]: E0711 00:26:22.753599 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:26:22.753613 kubelet[2609]: W0711 00:26:22.753612 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:26:22.753666 kubelet[2609]: E0711 00:26:22.753621 2609 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:26:23.011929 containerd[1542]: time="2025-07-11T00:26:23.011801872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:23.012287 containerd[1542]: time="2025-07-11T00:26:23.012169470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 11 00:26:23.013315 containerd[1542]: time="2025-07-11T00:26:23.013278385Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:23.015233 containerd[1542]: time="2025-07-11T00:26:23.015191175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:23.016147 containerd[1542]: time="2025-07-11T00:26:23.016108411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.016319148s" Jul 11 00:26:23.016186 containerd[1542]: time="2025-07-11T00:26:23.016150011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 11 00:26:23.018985 containerd[1542]: time="2025-07-11T00:26:23.018941717Z" level=info msg="CreateContainer within sandbox \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:26:23.029497 containerd[1542]: time="2025-07-11T00:26:23.029450665Z" level=info msg="CreateContainer within sandbox \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09\"" Jul 11 00:26:23.030101 containerd[1542]: time="2025-07-11T00:26:23.029921263Z" level=info msg="StartContainer for \"edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09\"" Jul 11 00:26:23.084028 containerd[1542]: time="2025-07-11T00:26:23.083987917Z" level=info msg="StartContainer for \"edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09\" returns successfully" Jul 11 00:26:23.178341 containerd[1542]: time="2025-07-11T00:26:23.172685480Z" level=info msg="shim disconnected" id=edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09 namespace=k8s.io Jul 11 00:26:23.178341 containerd[1542]: time="2025-07-11T00:26:23.178340052Z" level=warning msg="cleaning up after shim disconnected" id=edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09 namespace=k8s.io Jul 11 00:26:23.178549 containerd[1542]: time="2025-07-11T00:26:23.178355012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:26:23.600112 kubelet[2609]: E0711 00:26:23.600053 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v4x" podUID="6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab" Jul 11 00:26:23.686244 kubelet[2609]: I0711 00:26:23.686210 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:23.686597 kubelet[2609]: E0711 00:26:23.686538 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:23.687425 containerd[1542]: time="2025-07-11T00:26:23.687364866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:26:24.010045 systemd[1]: run-containerd-runc-k8s.io-edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09-runc.cNkpHu.mount: Deactivated successfully. Jul 11 00:26:24.010190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edf1a01c85ff7e15476a1c1a757dfdacb4f70de9c7c9ab991d40f89ee6879d09-rootfs.mount: Deactivated successfully. Jul 11 00:26:25.600046 kubelet[2609]: E0711 00:26:25.599950 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-79v4x" podUID="6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab" Jul 11 00:26:25.902140 containerd[1542]: time="2025-07-11T00:26:25.902035686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:25.902688 containerd[1542]: time="2025-07-11T00:26:25.902656364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 11 00:26:25.903474 containerd[1542]: time="2025-07-11T00:26:25.903439760Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:25.905858 containerd[1542]: time="2025-07-11T00:26:25.905801470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:25.906591 containerd[1542]: time="2025-07-11T00:26:25.906556347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.219134881s" Jul 11 00:26:25.906591 containerd[1542]: time="2025-07-11T00:26:25.906587187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 11 00:26:25.909210 containerd[1542]: time="2025-07-11T00:26:25.909181055Z" level=info msg="CreateContainer within sandbox \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:26:25.932083 containerd[1542]: time="2025-07-11T00:26:25.932032797Z" level=info msg="CreateContainer within sandbox \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container 
id \"d2ab17fa2b4fb4e3de773bc77fe9715a2cdba61eadc3c737b380db1fde9b19ee\"" Jul 11 00:26:25.932607 containerd[1542]: time="2025-07-11T00:26:25.932579594Z" level=info msg="StartContainer for \"d2ab17fa2b4fb4e3de773bc77fe9715a2cdba61eadc3c737b380db1fde9b19ee\"" Jul 11 00:26:25.986624 containerd[1542]: time="2025-07-11T00:26:25.985147687Z" level=info msg="StartContainer for \"d2ab17fa2b4fb4e3de773bc77fe9715a2cdba61eadc3c737b380db1fde9b19ee\" returns successfully" Jul 11 00:26:26.757476 kubelet[2609]: I0711 00:26:26.757438 2609 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:26:26.761733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ab17fa2b4fb4e3de773bc77fe9715a2cdba61eadc3c737b380db1fde9b19ee-rootfs.mount: Deactivated successfully. Jul 11 00:26:26.766523 containerd[1542]: time="2025-07-11T00:26:26.766277634Z" level=info msg="shim disconnected" id=d2ab17fa2b4fb4e3de773bc77fe9715a2cdba61eadc3c737b380db1fde9b19ee namespace=k8s.io Jul 11 00:26:26.766523 containerd[1542]: time="2025-07-11T00:26:26.766352793Z" level=warning msg="cleaning up after shim disconnected" id=d2ab17fa2b4fb4e3de773bc77fe9715a2cdba61eadc3c737b380db1fde9b19ee namespace=k8s.io Jul 11 00:26:26.766523 containerd[1542]: time="2025-07-11T00:26:26.766364273Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:26:26.974122 kubelet[2609]: I0711 00:26:26.974078 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68360d6b-1341-4cc8-9b2e-e1fda0c521fc-config-volume\") pod \"coredns-7c65d6cfc9-677wx\" (UID: \"68360d6b-1341-4cc8-9b2e-e1fda0c521fc\") " pod="kube-system/coredns-7c65d6cfc9-677wx" Jul 11 00:26:26.974122 kubelet[2609]: I0711 00:26:26.974127 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqn77\" (UniqueName: \"kubernetes.io/projected/65271ea6-5134-4ff0-a88f-9119ebccb488-kube-api-access-nqn77\") pod \"coredns-7c65d6cfc9-qjjwl\" (UID: \"65271ea6-5134-4ff0-a88f-9119ebccb488\") " pod="kube-system/coredns-7c65d6cfc9-qjjwl" Jul 11 00:26:26.974304 kubelet[2609]: I0711 00:26:26.974147 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-backend-key-pair\") pod \"whisker-7457bc779-ccvq6\" (UID: \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\") " pod="calico-system/whisker-7457bc779-ccvq6" Jul 11 00:26:26.974304 kubelet[2609]: I0711 00:26:26.974169 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2771ca54-4f98-4224-98e6-fa1c41a6b452-config\") pod \"goldmane-58fd7646b9-4smck\" (UID: \"2771ca54-4f98-4224-98e6-fa1c41a6b452\") " pod="calico-system/goldmane-58fd7646b9-4smck" Jul 11 00:26:26.974304 kubelet[2609]: I0711 00:26:26.974184 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2771ca54-4f98-4224-98e6-fa1c41a6b452-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-4smck\" (UID: \"2771ca54-4f98-4224-98e6-fa1c41a6b452\") " pod="calico-system/goldmane-58fd7646b9-4smck" Jul 11 00:26:26.974304 kubelet[2609]: I0711 00:26:26.974202 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/240131e8-c1af-4198-9629-bd8842d57a9c-calico-apiserver-certs\") pod \"calico-apiserver-76d459d97b-xlnxl\" (UID: \"240131e8-c1af-4198-9629-bd8842d57a9c\") " pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" Jul 11 00:26:26.974304 kubelet[2609]: I0711 00:26:26.974220 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhvw\" (UniqueName: \"kubernetes.io/projected/f54b44fa-bd20-4c12-91fe-34f4011b849a-kube-api-access-plhvw\") pod \"calico-apiserver-76d459d97b-9692q\" (UID: \"f54b44fa-bd20-4c12-91fe-34f4011b849a\") " pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" Jul 11 00:26:26.974435 kubelet[2609]: I0711 00:26:26.974238 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gszxz\" (UniqueName: \"kubernetes.io/projected/f8ac4e62-8a44-4730-8b38-4b387684fc0f-kube-api-access-gszxz\") pod \"whisker-7457bc779-ccvq6\" (UID: \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\") " pod="calico-system/whisker-7457bc779-ccvq6" Jul 11 00:26:26.974435 kubelet[2609]: I0711 00:26:26.974257 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8b0240e-8f74-491a-84c4-c496b1ecf4cc-tigera-ca-bundle\") pod \"calico-kube-controllers-6bbc6b4cc-xqcp5\" (UID: \"f8b0240e-8f74-491a-84c4-c496b1ecf4cc\") " pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" Jul 11 00:26:26.974435 kubelet[2609]: I0711 00:26:26.974273 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jl7h\" (UniqueName: \"kubernetes.io/projected/f8b0240e-8f74-491a-84c4-c496b1ecf4cc-kube-api-access-2jl7h\") pod \"calico-kube-controllers-6bbc6b4cc-xqcp5\" (UID: \"f8b0240e-8f74-491a-84c4-c496b1ecf4cc\") " pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" Jul 11 00:26:26.974435 kubelet[2609]: I0711 00:26:26.974292 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65271ea6-5134-4ff0-a88f-9119ebccb488-config-volume\") pod \"coredns-7c65d6cfc9-qjjwl\" (UID: \"65271ea6-5134-4ff0-a88f-9119ebccb488\") " pod="kube-system/coredns-7c65d6cfc9-qjjwl" Jul 11 00:26:26.974435 kubelet[2609]: I0711 00:26:26.974307 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-ca-bundle\") pod \"whisker-7457bc779-ccvq6\" (UID: \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\") " pod="calico-system/whisker-7457bc779-ccvq6" Jul 11 00:26:26.974590 kubelet[2609]: I0711 00:26:26.974332 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdns\" (UniqueName: \"kubernetes.io/projected/2771ca54-4f98-4224-98e6-fa1c41a6b452-kube-api-access-ngdns\") pod \"goldmane-58fd7646b9-4smck\" (UID: \"2771ca54-4f98-4224-98e6-fa1c41a6b452\") " pod="calico-system/goldmane-58fd7646b9-4smck" Jul 11 00:26:26.974590 kubelet[2609]: I0711 00:26:26.974349 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f54b44fa-bd20-4c12-91fe-34f4011b849a-calico-apiserver-certs\") pod \"calico-apiserver-76d459d97b-9692q\" 
(UID: \"f54b44fa-bd20-4c12-91fe-34f4011b849a\") " pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" Jul 11 00:26:26.974590 kubelet[2609]: I0711 00:26:26.974367 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxnm\" (UniqueName: \"kubernetes.io/projected/68360d6b-1341-4cc8-9b2e-e1fda0c521fc-kube-api-access-vhxnm\") pod \"coredns-7c65d6cfc9-677wx\" (UID: \"68360d6b-1341-4cc8-9b2e-e1fda0c521fc\") " pod="kube-system/coredns-7c65d6cfc9-677wx" Jul 11 00:26:26.974590 kubelet[2609]: I0711 00:26:26.974382 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2771ca54-4f98-4224-98e6-fa1c41a6b452-goldmane-key-pair\") pod \"goldmane-58fd7646b9-4smck\" (UID: \"2771ca54-4f98-4224-98e6-fa1c41a6b452\") " pod="calico-system/goldmane-58fd7646b9-4smck" Jul 11 00:26:26.974590 kubelet[2609]: I0711 00:26:26.974400 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh66q\" (UniqueName: \"kubernetes.io/projected/240131e8-c1af-4198-9629-bd8842d57a9c-kube-api-access-zh66q\") pod \"calico-apiserver-76d459d97b-xlnxl\" (UID: \"240131e8-c1af-4198-9629-bd8842d57a9c\") " pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" Jul 11 00:26:27.106914 kubelet[2609]: E0711 00:26:27.106749 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:27.108119 containerd[1542]: time="2025-07-11T00:26:27.108085990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-677wx,Uid:68360d6b-1341-4cc8-9b2e-e1fda0c521fc,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:27.111878 containerd[1542]: time="2025-07-11T00:26:27.111757416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7457bc779-ccvq6,Uid:f8ac4e62-8a44-4730-8b38-4b387684fc0f,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:27.112160 kubelet[2609]: E0711 00:26:27.112124 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:27.113982 containerd[1542]: time="2025-07-11T00:26:27.113938648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qjjwl,Uid:65271ea6-5134-4ff0-a88f-9119ebccb488,Namespace:kube-system,Attempt:0,}" Jul 11 00:26:27.114255 containerd[1542]: time="2025-07-11T00:26:27.114215887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-4smck,Uid:2771ca54-4f98-4224-98e6-fa1c41a6b452,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:27.116070 containerd[1542]: time="2025-07-11T00:26:27.115980440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc6b4cc-xqcp5,Uid:f8b0240e-8f74-491a-84c4-c496b1ecf4cc,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:27.118888 containerd[1542]: time="2025-07-11T00:26:27.118860589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-xlnxl,Uid:240131e8-c1af-4198-9629-bd8842d57a9c,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:26:27.121906 containerd[1542]: time="2025-07-11T00:26:27.121873418Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-76d459d97b-9692q,Uid:f54b44fa-bd20-4c12-91fe-34f4011b849a,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:26:27.604356 containerd[1542]: time="2025-07-11T00:26:27.603833385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v4x,Uid:6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:27.635506 containerd[1542]: time="2025-07-11T00:26:27.635460225Z" level=error msg="Failed to destroy network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.635992 containerd[1542]: time="2025-07-11T00:26:27.635964983Z" level=error msg="encountered an error cleaning up failed sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.636103 containerd[1542]: time="2025-07-11T00:26:27.636083022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-xlnxl,Uid:240131e8-c1af-4198-9629-bd8842d57a9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.642625 kubelet[2609]: E0711 00:26:27.642497 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.643132 containerd[1542]: time="2025-07-11T00:26:27.642788117Z" level=error msg="Failed to destroy network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.643132 containerd[1542]: time="2025-07-11T00:26:27.643098396Z" level=error msg="encountered an error cleaning up failed sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.643220 containerd[1542]: time="2025-07-11T00:26:27.643138035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc6b4cc-xqcp5,Uid:f8b0240e-8f74-491a-84c4-c496b1ecf4cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.643848 kubelet[2609]: E0711 00:26:27.643330 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.643848 kubelet[2609]: E0711 00:26:27.643382 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" Jul 11 00:26:27.643848 kubelet[2609]: E0711 00:26:27.643412 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" Jul 11 00:26:27.643981 kubelet[2609]: E0711 00:26:27.643460 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bbc6b4cc-xqcp5_calico-system(f8b0240e-8f74-491a-84c4-c496b1ecf4cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bbc6b4cc-xqcp5_calico-system(f8b0240e-8f74-491a-84c4-c496b1ecf4cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" podUID="f8b0240e-8f74-491a-84c4-c496b1ecf4cc" Jul 11 00:26:27.645274 kubelet[2609]: E0711 00:26:27.644897 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" Jul 11 00:26:27.645274 kubelet[2609]: E0711 00:26:27.644944 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" Jul 11 00:26:27.645274 kubelet[2609]: E0711 00:26:27.644981 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-76d459d97b-xlnxl_calico-apiserver(240131e8-c1af-4198-9629-bd8842d57a9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d459d97b-xlnxl_calico-apiserver(240131e8-c1af-4198-9629-bd8842d57a9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" podUID="240131e8-c1af-4198-9629-bd8842d57a9c" Jul 11 00:26:27.650669 containerd[1542]: time="2025-07-11T00:26:27.650623167Z" level=error msg="Failed to destroy network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.651225 containerd[1542]: time="2025-07-11T00:26:27.651183605Z" level=error msg="encountered an error cleaning up failed sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.651327 containerd[1542]: time="2025-07-11T00:26:27.651239245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qjjwl,Uid:65271ea6-5134-4ff0-a88f-9119ebccb488,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.651456 kubelet[2609]: E0711 00:26:27.651422 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.651506 kubelet[2609]: E0711 00:26:27.651471 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qjjwl" Jul 11 00:26:27.651506 kubelet[2609]: E0711 00:26:27.651489 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qjjwl" Jul 11 00:26:27.651574 kubelet[2609]: E0711 00:26:27.651519 2609 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qjjwl_kube-system(65271ea6-5134-4ff0-a88f-9119ebccb488)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qjjwl_kube-system(65271ea6-5134-4ff0-a88f-9119ebccb488)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qjjwl" podUID="65271ea6-5134-4ff0-a88f-9119ebccb488" Jul 11 00:26:27.653470 containerd[1542]: time="2025-07-11T00:26:27.653430756Z" level=error msg="Failed to destroy network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.655172 containerd[1542]: time="2025-07-11T00:26:27.655138430Z" level=error msg="encountered an error cleaning up failed sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.655361 containerd[1542]: time="2025-07-11T00:26:27.655284589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-677wx,Uid:68360d6b-1341-4cc8-9b2e-e1fda0c521fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.657026 kubelet[2609]: E0711 00:26:27.656985 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.657121 kubelet[2609]: E0711 00:26:27.657038 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-677wx" Jul 11 00:26:27.657121 kubelet[2609]: E0711 00:26:27.657055 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-677wx" Jul 11 
00:26:27.657206 kubelet[2609]: E0711 00:26:27.657091 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-677wx_kube-system(68360d6b-1341-4cc8-9b2e-e1fda0c521fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-677wx_kube-system(68360d6b-1341-4cc8-9b2e-e1fda0c521fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-677wx" podUID="68360d6b-1341-4cc8-9b2e-e1fda0c521fc" Jul 11 00:26:27.658124 containerd[1542]: time="2025-07-11T00:26:27.657910539Z" level=error msg="Failed to destroy network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.658743 containerd[1542]: time="2025-07-11T00:26:27.658697056Z" level=error msg="encountered an error cleaning up failed sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.658946 containerd[1542]: time="2025-07-11T00:26:27.658744336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-9692q,Uid:f54b44fa-bd20-4c12-91fe-34f4011b849a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.659216 kubelet[2609]: E0711 00:26:27.659183 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.659267 kubelet[2609]: E0711 00:26:27.659232 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" Jul 11 00:26:27.659293 kubelet[2609]: E0711 00:26:27.659259 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" Jul 11 00:26:27.659423 kubelet[2609]: E0711 00:26:27.659331 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76d459d97b-9692q_calico-apiserver(f54b44fa-bd20-4c12-91fe-34f4011b849a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76d459d97b-9692q_calico-apiserver(f54b44fa-bd20-4c12-91fe-34f4011b849a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" podUID="f54b44fa-bd20-4c12-91fe-34f4011b849a" Jul 11 00:26:27.663488 containerd[1542]: time="2025-07-11T00:26:27.663454158Z" level=error msg="Failed to destroy network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.663769 containerd[1542]: time="2025-07-11T00:26:27.663743237Z" level=error msg="encountered an error cleaning up failed sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.663832 containerd[1542]: time="2025-07-11T00:26:27.663793957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7457bc779-ccvq6,Uid:f8ac4e62-8a44-4730-8b38-4b387684fc0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.663991 kubelet[2609]: E0711 00:26:27.663964 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.664945 containerd[1542]: time="2025-07-11T00:26:27.664901433Z" level=error msg="Failed to destroy network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.665224 containerd[1542]: time="2025-07-11T00:26:27.665187912Z" level=error msg="encountered an error cleaning up failed sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 11 00:26:27.665271 containerd[1542]: time="2025-07-11T00:26:27.665230911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-4smck,Uid:2771ca54-4f98-4224-98e6-fa1c41a6b452,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.666006 kubelet[2609]: E0711 00:26:27.665974 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.666078 kubelet[2609]: E0711 00:26:27.666023 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-4smck" Jul 11 00:26:27.666078 kubelet[2609]: E0711 00:26:27.666041 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-4smck" Jul 11 00:26:27.666210 kubelet[2609]: E0711 00:26:27.666074 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-4smck_calico-system(2771ca54-4f98-4224-98e6-fa1c41a6b452)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-4smck_calico-system(2771ca54-4f98-4224-98e6-fa1c41a6b452)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-4smck" podUID="2771ca54-4f98-4224-98e6-fa1c41a6b452" Jul 11 00:26:27.668093 kubelet[2609]: E0711 00:26:27.664007 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7457bc779-ccvq6" Jul 11 00:26:27.668093 kubelet[2609]: E0711 00:26:27.667986 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7457bc779-ccvq6" Jul 11 00:26:27.668093 kubelet[2609]: E0711 00:26:27.668031 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7457bc779-ccvq6_calico-system(f8ac4e62-8a44-4730-8b38-4b387684fc0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7457bc779-ccvq6_calico-system(f8ac4e62-8a44-4730-8b38-4b387684fc0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7457bc779-ccvq6" podUID="f8ac4e62-8a44-4730-8b38-4b387684fc0f" Jul 11 00:26:27.687861 containerd[1542]: time="2025-07-11T00:26:27.687793306Z" level=error msg="Failed to destroy network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.688167 containerd[1542]: time="2025-07-11T00:26:27.688128104Z" level=error msg="encountered an error cleaning up failed sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.688201 containerd[1542]: time="2025-07-11T00:26:27.688177904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v4x,Uid:6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.688421 kubelet[2609]: E0711 00:26:27.688387 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.688480 kubelet[2609]: E0711 00:26:27.688445 2609 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79v4x" Jul 11 00:26:27.688480 kubelet[2609]: E0711 00:26:27.688464 2609 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-79v4x" Jul 11 00:26:27.688540 kubelet[2609]: E0711 00:26:27.688509 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-79v4x_calico-system(6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-79v4x_calico-system(6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79v4x" podUID="6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab" Jul 11 00:26:27.695237 kubelet[2609]: I0711 00:26:27.695209 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:27.696261 kubelet[2609]: I0711 00:26:27.696172 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:27.696346 containerd[1542]: time="2025-07-11T00:26:27.696197394Z" level=info msg="StopPodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\"" Jul 11 00:26:27.696414 containerd[1542]: time="2025-07-11T00:26:27.696363073Z" level=info msg="Ensure that sandbox a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866 in task-service has been cleanup successfully" Jul 11 00:26:27.696893 containerd[1542]: time="2025-07-11T00:26:27.696867591Z" level=info msg="StopPodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\"" Jul 11 00:26:27.697046 containerd[1542]: time="2025-07-11T00:26:27.697002031Z" level=info msg="Ensure that sandbox 691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c in task-service has been cleanup successfully" Jul 11 00:26:27.698072 kubelet[2609]: I0711 00:26:27.697979 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:27.698712 containerd[1542]: time="2025-07-11T00:26:27.698555425Z" level=info msg="StopPodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\"" Jul 11 00:26:27.698814 containerd[1542]: time="2025-07-11T00:26:27.698728744Z" level=info msg="Ensure that sandbox 4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301 in task-service has been cleanup successfully" Jul 11 00:26:27.703763 containerd[1542]: time="2025-07-11T00:26:27.703716245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:26:27.705116 kubelet[2609]: I0711 00:26:27.704998 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:26:27.710628 kubelet[2609]: I0711 00:26:27.710209 2609 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:27.711061 containerd[1542]: time="2025-07-11T00:26:27.711025977Z" level=info msg="StopPodSandbox for \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\"" Jul 11 00:26:27.711244 containerd[1542]: time="2025-07-11T00:26:27.711216536Z" level=info msg="Ensure that sandbox d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e in task-service has been cleanup successfully" Jul 11 00:26:27.711891 containerd[1542]: time="2025-07-11T00:26:27.711868734Z" level=info msg="StopPodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\"" Jul 11 00:26:27.713253 kubelet[2609]: I0711 00:26:27.713229 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:27.713510 containerd[1542]: time="2025-07-11T00:26:27.713437888Z" level=info msg="Ensure that sandbox 6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3 in task-service has been cleanup successfully" Jul 11 00:26:27.713706 containerd[1542]: time="2025-07-11T00:26:27.713670927Z" level=info msg="StopPodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\"" Jul 11 00:26:27.713875 containerd[1542]: time="2025-07-11T00:26:27.713854126Z" level=info msg="Ensure that sandbox ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084 in task-service has been cleanup successfully" Jul 11 00:26:27.714625 kubelet[2609]: I0711 00:26:27.714583 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:27.715919 containerd[1542]: time="2025-07-11T00:26:27.715851519Z" level=info msg="StopPodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\"" Jul 11 00:26:27.716614 containerd[1542]: time="2025-07-11T00:26:27.716589956Z" level=info msg="Ensure that sandbox 8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241 in task-service has been cleanup successfully" Jul 11 00:26:27.717099 kubelet[2609]: I0711 00:26:27.717075 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:27.718209 containerd[1542]: time="2025-07-11T00:26:27.718052790Z" level=info msg="StopPodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\"" Jul 11 00:26:27.719184 containerd[1542]: time="2025-07-11T00:26:27.719144026Z" level=info msg="Ensure that sandbox 21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7 in task-service has been cleanup successfully" Jul 11 00:26:27.744453 containerd[1542]: time="2025-07-11T00:26:27.744394210Z" level=error msg="StopPodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" failed" error="failed to destroy network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.744660 kubelet[2609]: E0711 00:26:27.744623 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:27.744731 kubelet[2609]: E0711 00:26:27.744678 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c"} Jul 11 00:26:27.744764 kubelet[2609]: E0711 00:26:27.744744 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f54b44fa-bd20-4c12-91fe-34f4011b849a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.744817 kubelet[2609]: E0711 00:26:27.744768 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f54b44fa-bd20-4c12-91fe-34f4011b849a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" podUID="f54b44fa-bd20-4c12-91fe-34f4011b849a" Jul 11 00:26:27.752894 containerd[1542]: time="2025-07-11T00:26:27.752843898Z" level=error msg="StopPodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" failed" error="failed to destroy network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.753126 kubelet[2609]: E0711 00:26:27.753080 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:27.753189 kubelet[2609]: E0711 00:26:27.753139 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301"} Jul 11 00:26:27.753189 kubelet[2609]: E0711 00:26:27.753172 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68360d6b-1341-4cc8-9b2e-e1fda0c521fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.753395 kubelet[2609]: E0711 
00:26:27.753194 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68360d6b-1341-4cc8-9b2e-e1fda0c521fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-677wx" podUID="68360d6b-1341-4cc8-9b2e-e1fda0c521fc" Jul 11 00:26:27.753632 containerd[1542]: time="2025-07-11T00:26:27.753595375Z" level=error msg="StopPodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" failed" error="failed to destroy network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.753802 kubelet[2609]: E0711 00:26:27.753770 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:27.753874 kubelet[2609]: E0711 00:26:27.753809 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866"} Jul 11 00:26:27.753874 kubelet[2609]: E0711 00:26:27.753865 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8b0240e-8f74-491a-84c4-c496b1ecf4cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.753968 kubelet[2609]: E0711 00:26:27.753888 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8b0240e-8f74-491a-84c4-c496b1ecf4cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" podUID="f8b0240e-8f74-491a-84c4-c496b1ecf4cc" Jul 11 00:26:27.760519 containerd[1542]: time="2025-07-11T00:26:27.760408829Z" level=error msg="StopPodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" failed" error="failed to destroy network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
11 00:26:27.760647 kubelet[2609]: E0711 00:26:27.760616 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:27.761144 kubelet[2609]: E0711 00:26:27.760655 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241"} Jul 11 00:26:27.761144 kubelet[2609]: E0711 00:26:27.760681 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2771ca54-4f98-4224-98e6-fa1c41a6b452\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.761144 kubelet[2609]: E0711 00:26:27.760703 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2771ca54-4f98-4224-98e6-fa1c41a6b452\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-4smck" podUID="2771ca54-4f98-4224-98e6-fa1c41a6b452" Jul 11 00:26:27.766348 containerd[1542]: time="2025-07-11T00:26:27.766310927Z" level=error msg="StopPodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" failed" error="failed to destroy network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.767259 kubelet[2609]: E0711 00:26:27.766685 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:26:27.767259 kubelet[2609]: E0711 00:26:27.766731 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3"} Jul 11 00:26:27.767259 kubelet[2609]: E0711 00:26:27.766761 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"240131e8-c1af-4198-9629-bd8842d57a9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.767259 kubelet[2609]: E0711 00:26:27.766788 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"240131e8-c1af-4198-9629-bd8842d57a9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" podUID="240131e8-c1af-4198-9629-bd8842d57a9c" Jul 11 00:26:27.777063 containerd[1542]: time="2025-07-11T00:26:27.777022646Z" level=error msg="StopPodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" failed" error="failed to destroy network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.777439 kubelet[2609]: E0711 00:26:27.777401 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:27.777513 kubelet[2609]: E0711 00:26:27.777448 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7"} Jul 11 00:26:27.777513 kubelet[2609]: E0711 00:26:27.777483 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.777595 kubelet[2609]: E0711 00:26:27.777507 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7457bc779-ccvq6" podUID="f8ac4e62-8a44-4730-8b38-4b387684fc0f" Jul 11 00:26:27.782437 containerd[1542]: time="2025-07-11T00:26:27.782401666Z" level=error msg="StopPodSandbox for \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" 
failed" error="failed to destroy network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.783074 kubelet[2609]: E0711 00:26:27.782755 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:27.783074 kubelet[2609]: E0711 00:26:27.782798 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e"} Jul 11 00:26:27.783074 kubelet[2609]: E0711 00:26:27.782838 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.783074 kubelet[2609]: E0711 00:26:27.782858 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-79v4x" podUID="6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab" Jul 11 00:26:27.783266 containerd[1542]: time="2025-07-11T00:26:27.783007623Z" level=error msg="StopPodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" failed" error="failed to destroy network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:26:27.783463 kubelet[2609]: E0711 00:26:27.783420 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:27.783463 kubelet[2609]: E0711 00:26:27.783455 2609 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084"} Jul 11 00:26:27.783540 kubelet[2609]: E0711 
00:26:27.783484 2609 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65271ea6-5134-4ff0-a88f-9119ebccb488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:26:27.783540 kubelet[2609]: E0711 00:26:27.783502 2609 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65271ea6-5134-4ff0-a88f-9119ebccb488\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qjjwl" podUID="65271ea6-5134-4ff0-a88f-9119ebccb488" Jul 11 00:26:31.800539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount379570701.mount: Deactivated successfully. Jul 11 00:26:32.054160 containerd[1542]: time="2025-07-11T00:26:32.054044788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:32.055150 containerd[1542]: time="2025-07-11T00:26:32.055107825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 11 00:26:32.057749 containerd[1542]: time="2025-07-11T00:26:32.057695658Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:32.060390 containerd[1542]: time="2025-07-11T00:26:32.060161611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:32.067469 containerd[1542]: time="2025-07-11T00:26:32.067432471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.363662066s" Jul 11 00:26:32.067469 containerd[1542]: time="2025-07-11T00:26:32.067469351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 11 00:26:32.076615 containerd[1542]: time="2025-07-11T00:26:32.076492886Z" level=info msg="CreateContainer within sandbox \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:26:32.096736 containerd[1542]: time="2025-07-11T00:26:32.096664390Z" level=info msg="CreateContainer within sandbox \"547a6de939e4b8d113eee962551cfc12f16006c73c3945f8e5b4c863162fc25f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ecbe7339f952f334e48374529115651cbcce48d85951c8d3a3533acf854931b3\"" Jul 11 00:26:32.097300 
containerd[1542]: time="2025-07-11T00:26:32.097272909Z" level=info msg="StartContainer for \"ecbe7339f952f334e48374529115651cbcce48d85951c8d3a3533acf854931b3\"" Jul 11 00:26:32.226462 containerd[1542]: time="2025-07-11T00:26:32.226409193Z" level=info msg="StartContainer for \"ecbe7339f952f334e48374529115651cbcce48d85951c8d3a3533acf854931b3\" returns successfully" Jul 11 00:26:32.435262 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:26:32.435406 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:26:32.569923 containerd[1542]: time="2025-07-11T00:26:32.569877407Z" level=info msg="StopPodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\"" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.712 [INFO][3909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.712 [INFO][3909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" iface="eth0" netns="/var/run/netns/cni-14962b91-f1bd-cf08-ff4e-662f460c478e" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.713 [INFO][3909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" iface="eth0" netns="/var/run/netns/cni-14962b91-f1bd-cf08-ff4e-662f460c478e" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.716 [INFO][3909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" iface="eth0" netns="/var/run/netns/cni-14962b91-f1bd-cf08-ff4e-662f460c478e" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.716 [INFO][3909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.717 [INFO][3909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.856 [INFO][3921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.856 [INFO][3921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.856 [INFO][3921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.872 [WARNING][3921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.872 [INFO][3921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.874 [INFO][3921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:32.883702 containerd[1542]: 2025-07-11 00:26:32.878 [INFO][3909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:32.883702 containerd[1542]: time="2025-07-11T00:26:32.882585266Z" level=info msg="TearDown network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" successfully" Jul 11 00:26:32.883702 containerd[1542]: time="2025-07-11T00:26:32.882615946Z" level=info msg="StopPodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" returns successfully" Jul 11 00:26:32.882949 systemd[1]: run-netns-cni\x2d14962b91\x2df1bd\x2dcf08\x2dff4e\x2d662f460c478e.mount: Deactivated successfully. Jul 11 00:26:32.940156 kubelet[2609]: I0711 00:26:32.939906 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:32.941269 kubelet[2609]: E0711 00:26:32.940254 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:32.983806 kubelet[2609]: I0711 00:26:32.983721 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mk4zm" podStartSLOduration=2.240116533 podStartE2EDuration="13.98272343s" podCreationTimestamp="2025-07-11 00:26:19 +0000 UTC" firstStartedPulling="2025-07-11 00:26:20.325681532 +0000 UTC m=+21.815239150" lastFinishedPulling="2025-07-11 00:26:32.068288429 +0000 UTC m=+33.557846047" observedRunningTime="2025-07-11 00:26:32.747517518 +0000 UTC m=+34.237075136" watchObservedRunningTime="2025-07-11 00:26:32.98272343 +0000 UTC m=+34.472281048" Jul 11 00:26:33.023795 kubelet[2609]: I0711 00:26:33.023760 2609 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-ca-bundle\") pod \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\" (UID: \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\") " Jul 11 00:26:33.024010 kubelet[2609]: I0711 00:26:33.023808 2609 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-backend-key-pair\") pod \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\" (UID: \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\") " Jul 11 00:26:33.024010 kubelet[2609]: I0711 00:26:33.023850 2609 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gszxz\" (UniqueName: \"kubernetes.io/projected/f8ac4e62-8a44-4730-8b38-4b387684fc0f-kube-api-access-gszxz\") pod \"f8ac4e62-8a44-4730-8b38-4b387684fc0f\" (UID: 
\"f8ac4e62-8a44-4730-8b38-4b387684fc0f\") " Jul 11 00:26:33.029474 kubelet[2609]: I0711 00:26:33.029437 2609 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8ac4e62-8a44-4730-8b38-4b387684fc0f-kube-api-access-gszxz" (OuterVolumeSpecName: "kube-api-access-gszxz") pod "f8ac4e62-8a44-4730-8b38-4b387684fc0f" (UID: "f8ac4e62-8a44-4730-8b38-4b387684fc0f"). InnerVolumeSpecName "kube-api-access-gszxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:26:33.030957 kubelet[2609]: I0711 00:26:33.030926 2609 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f8ac4e62-8a44-4730-8b38-4b387684fc0f" (UID: "f8ac4e62-8a44-4730-8b38-4b387684fc0f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:26:33.031014 systemd[1]: var-lib-kubelet-pods-f8ac4e62\x2d8a44\x2d4730\x2d8b38\x2d4b387684fc0f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgszxz.mount: Deactivated successfully. Jul 11 00:26:33.040941 kubelet[2609]: I0711 00:26:33.040911 2609 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f8ac4e62-8a44-4730-8b38-4b387684fc0f" (UID: "f8ac4e62-8a44-4730-8b38-4b387684fc0f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:26:33.042525 systemd[1]: var-lib-kubelet-pods-f8ac4e62\x2d8a44\x2d4730\x2d8b38\x2d4b387684fc0f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 11 00:26:33.124408 kubelet[2609]: I0711 00:26:33.124372 2609 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gszxz\" (UniqueName: \"kubernetes.io/projected/f8ac4e62-8a44-4730-8b38-4b387684fc0f-kube-api-access-gszxz\") on node \"localhost\" DevicePath \"\"" Jul 11 00:26:33.124408 kubelet[2609]: I0711 00:26:33.124407 2609 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:26:33.124408 kubelet[2609]: I0711 00:26:33.124416 2609 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f8ac4e62-8a44-4730-8b38-4b387684fc0f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:26:33.732853 kubelet[2609]: E0711 00:26:33.732462 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:33.929139 kubelet[2609]: I0711 00:26:33.929061 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmmt\" (UniqueName: \"kubernetes.io/projected/41a7d22a-cf19-4c2e-ba17-84d4403c20eb-kube-api-access-xxmmt\") pod \"whisker-5cd7549d9c-fsbl2\" (UID: \"41a7d22a-cf19-4c2e-ba17-84d4403c20eb\") " pod="calico-system/whisker-5cd7549d9c-fsbl2" Jul 11 00:26:33.929974 kubelet[2609]: I0711 00:26:33.929912 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/41a7d22a-cf19-4c2e-ba17-84d4403c20eb-whisker-backend-key-pair\") pod \"whisker-5cd7549d9c-fsbl2\" (UID: \"41a7d22a-cf19-4c2e-ba17-84d4403c20eb\") " pod="calico-system/whisker-5cd7549d9c-fsbl2" Jul 11 00:26:33.930151 kubelet[2609]: I0711 00:26:33.930045 2609 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41a7d22a-cf19-4c2e-ba17-84d4403c20eb-whisker-ca-bundle\") pod \"whisker-5cd7549d9c-fsbl2\" (UID: \"41a7d22a-cf19-4c2e-ba17-84d4403c20eb\") " pod="calico-system/whisker-5cd7549d9c-fsbl2" Jul 11 00:26:34.086601 containerd[1542]: time="2025-07-11T00:26:34.086481511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd7549d9c-fsbl2,Uid:41a7d22a-cf19-4c2e-ba17-84d4403c20eb,Namespace:calico-system,Attempt:0,}" Jul 11 00:26:34.099955 kernel: bpftool[4120]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:26:34.237153 systemd-networkd[1230]: cali32017b6b145: Link UP Jul 11 00:26:34.239467 systemd-networkd[1230]: cali32017b6b145: Gained carrier Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.154 [INFO][4122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0 whisker-5cd7549d9c- calico-system 41a7d22a-cf19-4c2e-ba17-84d4403c20eb 898 0 2025-07-11 00:26:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5cd7549d9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5cd7549d9c-fsbl2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali32017b6b145 [] [] }} 
ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.154 [INFO][4122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.181 [INFO][4136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" HandleID="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Workload="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.182 [INFO][4136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" HandleID="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Workload="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5cd7549d9c-fsbl2", "timestamp":"2025-07-11 00:26:34.18194904 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.182 [INFO][4136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.182 [INFO][4136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.182 [INFO][4136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.194 [INFO][4136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.199 [INFO][4136] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.204 [INFO][4136] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.206 [INFO][4136] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.208 [INFO][4136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.208 [INFO][4136] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.212 [INFO][4136] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.216 [INFO][4136] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.223 [INFO][4136] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.223 [INFO][4136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" host="localhost" Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.223 [INFO][4136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
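The IPAM entries above show the host "localhost" holding an affinity for the block 192.168.88.128/26 and claiming 192.168.88.129 from it for whisker-5cd7549d9c-fsbl2. A quick illustration of that block arithmetic using Python's standard ipaddress module (not Calico's IPAM code):

    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # block the host holds an affinity for
    claimed = ipaddress.ip_address("192.168.88.129")    # address assigned to the whisker pod

    print(block.num_addresses)   # 64 addresses: 192.168.88.128 through 192.168.88.191
    print(claimed in block)      # True - the claimed IP falls inside the affine block
    print(next(block.hosts()))   # 192.168.88.129, the first host address in the /26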
Jul 11 00:26:34.261415 containerd[1542]: 2025-07-11 00:26:34.223 [INFO][4136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" HandleID="k8s-pod-network.2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Workload="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.262370 containerd[1542]: 2025-07-11 00:26:34.226 [INFO][4122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0", GenerateName:"whisker-5cd7549d9c-", Namespace:"calico-system", SelfLink:"", UID:"41a7d22a-cf19-4c2e-ba17-84d4403c20eb", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cd7549d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5cd7549d9c-fsbl2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32017b6b145", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:34.262370 containerd[1542]: 2025-07-11 00:26:34.227 [INFO][4122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.262370 containerd[1542]: 2025-07-11 00:26:34.227 [INFO][4122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32017b6b145 ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.262370 containerd[1542]: 2025-07-11 00:26:34.241 [INFO][4122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.262370 containerd[1542]: 2025-07-11 00:26:34.241 [INFO][4122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0", GenerateName:"whisker-5cd7549d9c-", Namespace:"calico-system", SelfLink:"", UID:"41a7d22a-cf19-4c2e-ba17-84d4403c20eb", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5cd7549d9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe", Pod:"whisker-5cd7549d9c-fsbl2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali32017b6b145", MAC:"32:64:f1:ed:1d:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:34.262370 containerd[1542]: 2025-07-11 00:26:34.257 [INFO][4122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe" Namespace="calico-system" Pod="whisker-5cd7549d9c-fsbl2" WorkloadEndpoint="localhost-k8s-whisker--5cd7549d9c--fsbl2-eth0" Jul 11 00:26:34.279874 containerd[1542]: time="2025-07-11T00:26:34.279735043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:34.280546 containerd[1542]: time="2025-07-11T00:26:34.280490961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:34.280676 containerd[1542]: time="2025-07-11T00:26:34.280628201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:34.280888 containerd[1542]: time="2025-07-11T00:26:34.280805361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:34.284641 systemd-networkd[1230]: vxlan.calico: Link UP Jul 11 00:26:34.284744 systemd-networkd[1230]: vxlan.calico: Gained carrier Jul 11 00:26:34.320075 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:34.341399 containerd[1542]: time="2025-07-11T00:26:34.341255974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd7549d9c-fsbl2,Uid:41a7d22a-cf19-4c2e-ba17-84d4403c20eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe\"" Jul 11 00:26:34.344460 containerd[1542]: time="2025-07-11T00:26:34.344385447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:26:34.603182 kubelet[2609]: I0711 00:26:34.603094 2609 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8ac4e62-8a44-4730-8b38-4b387684fc0f" path="/var/lib/kubelet/pods/f8ac4e62-8a44-4730-8b38-4b387684fc0f/volumes" Jul 11 00:26:35.410581 containerd[1542]: time="2025-07-11T00:26:35.410523568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:35.410581 containerd[1542]: time="2025-07-11T00:26:35.412019045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 11 00:26:35.413007 containerd[1542]: time="2025-07-11T00:26:35.412958963Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:35.414758 containerd[1542]: time="2025-07-11T00:26:35.414715159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:35.415835 containerd[1542]: time="2025-07-11T00:26:35.415783636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.071367669s" Jul 11 00:26:35.415879 containerd[1542]: time="2025-07-11T00:26:35.415842036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 11 00:26:35.417740 containerd[1542]: time="2025-07-11T00:26:35.417712352Z" level=info msg="CreateContainer within sandbox \"2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:26:35.427584 containerd[1542]: time="2025-07-11T00:26:35.427462570Z" level=info msg="CreateContainer within sandbox \"2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b32c4b0688f234f2dc47fd314ed4178f11d78e8ed9f21c04f1ff012b0d0f8fc9\"" Jul 11 00:26:35.428320 containerd[1542]: time="2025-07-11T00:26:35.428289128Z" level=info msg="StartContainer for \"b32c4b0688f234f2dc47fd314ed4178f11d78e8ed9f21c04f1ff012b0d0f8fc9\"" Jul 11 00:26:35.441018 systemd-networkd[1230]: 
cali32017b6b145: Gained IPv6LL Jul 11 00:26:35.483963 containerd[1542]: time="2025-07-11T00:26:35.483913362Z" level=info msg="StartContainer for \"b32c4b0688f234f2dc47fd314ed4178f11d78e8ed9f21c04f1ff012b0d0f8fc9\" returns successfully" Jul 11 00:26:35.485143 containerd[1542]: time="2025-07-11T00:26:35.484955719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:26:35.505313 systemd-networkd[1230]: vxlan.calico: Gained IPv6LL Jul 11 00:26:37.035922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827072190.mount: Deactivated successfully. Jul 11 00:26:37.050090 containerd[1542]: time="2025-07-11T00:26:37.049517644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:37.055777 containerd[1542]: time="2025-07-11T00:26:37.055741311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 11 00:26:37.056746 containerd[1542]: time="2025-07-11T00:26:37.056717709Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:37.059944 containerd[1542]: time="2025-07-11T00:26:37.059901623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:37.060931 containerd[1542]: time="2025-07-11T00:26:37.060901341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.575912382s" Jul 11 00:26:37.061049 containerd[1542]: time="2025-07-11T00:26:37.061033421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 11 00:26:37.063745 containerd[1542]: time="2025-07-11T00:26:37.063355816Z" level=info msg="CreateContainer within sandbox \"2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:26:37.078584 containerd[1542]: time="2025-07-11T00:26:37.078513066Z" level=info msg="CreateContainer within sandbox \"2bb2d02772c7ce5a86f33a69df12d071ca29a4ca084e6bfe48d1372ae680b7fe\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2984c1495f9ed3a3ee1fbffad3e5c8f46936af9c89aa8682e3471718f41c517e\"" Jul 11 00:26:37.079340 containerd[1542]: time="2025-07-11T00:26:37.079140625Z" level=info msg="StartContainer for \"2984c1495f9ed3a3ee1fbffad3e5c8f46936af9c89aa8682e3471718f41c517e\"" Jul 11 00:26:37.145617 containerd[1542]: time="2025-07-11T00:26:37.144205775Z" level=info msg="StartContainer for \"2984c1495f9ed3a3ee1fbffad3e5c8f46936af9c89aa8682e3471718f41c517e\" returns successfully" Jul 11 00:26:37.756903 kubelet[2609]: I0711 00:26:37.755247 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5cd7549d9c-fsbl2" podStartSLOduration=2.036821786 podStartE2EDuration="4.755230196s" 
podCreationTimestamp="2025-07-11 00:26:33 +0000 UTC" firstStartedPulling="2025-07-11 00:26:34.343385089 +0000 UTC m=+35.832942707" lastFinishedPulling="2025-07-11 00:26:37.061793499 +0000 UTC m=+38.551351117" observedRunningTime="2025-07-11 00:26:37.753814279 +0000 UTC m=+39.243371897" watchObservedRunningTime="2025-07-11 00:26:37.755230196 +0000 UTC m=+39.244787814" Jul 11 00:26:38.602266 containerd[1542]: time="2025-07-11T00:26:38.602190942Z" level=info msg="StopPodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\"" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.646 [INFO][4395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.646 [INFO][4395] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" iface="eth0" netns="/var/run/netns/cni-46e56699-68b4-93ae-6082-dbba2e00e0eb" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.646 [INFO][4395] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" iface="eth0" netns="/var/run/netns/cni-46e56699-68b4-93ae-6082-dbba2e00e0eb" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.646 [INFO][4395] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" iface="eth0" netns="/var/run/netns/cni-46e56699-68b4-93ae-6082-dbba2e00e0eb" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.646 [INFO][4395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.646 [INFO][4395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.666 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.666 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.667 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.675 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.675 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.677 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:38.683073 containerd[1542]: 2025-07-11 00:26:38.679 [INFO][4395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:38.683850 containerd[1542]: time="2025-07-11T00:26:38.683667710Z" level=info msg="TearDown network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" successfully" Jul 11 00:26:38.683850 containerd[1542]: time="2025-07-11T00:26:38.683697389Z" level=info msg="StopPodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" returns successfully" Jul 11 00:26:38.684593 containerd[1542]: time="2025-07-11T00:26:38.684496748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc6b4cc-xqcp5,Uid:f8b0240e-8f74-491a-84c4-c496b1ecf4cc,Namespace:calico-system,Attempt:1,}" Jul 11 00:26:38.686450 systemd[1]: run-netns-cni\x2d46e56699\x2d68b4\x2d93ae\x2d6082\x2ddbba2e00e0eb.mount: Deactivated successfully. 
Jul 11 00:26:38.834782 systemd-networkd[1230]: cali3e7b680bebf: Link UP Jul 11 00:26:38.836189 systemd-networkd[1230]: cali3e7b680bebf: Gained carrier Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.774 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0 calico-kube-controllers-6bbc6b4cc- calico-system f8b0240e-8f74-491a-84c4-c496b1ecf4cc 929 0 2025-07-11 00:26:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bbc6b4cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bbc6b4cc-xqcp5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3e7b680bebf [] [] }} ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.775 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.799 [INFO][4427] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" HandleID="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.799 [INFO][4427] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" HandleID="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050eb20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bbc6b4cc-xqcp5", "timestamp":"2025-07-11 00:26:38.799723213 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.799 [INFO][4427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.799 [INFO][4427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.799 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.808 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.812 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.817 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.819 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.821 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.821 [INFO][4427] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.823 [INFO][4427] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.826 [INFO][4427] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.831 [INFO][4427] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.831 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" host="localhost" Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.831 [INFO][4427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
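The IPAM entries just above claim 192.168.88.130 from the block 192.168.88.128/26 that is affine to host "localhost". As a quick check of the arithmetic (not part of the log): a /26 holds 64 addresses, 192.168.88.128 through 192.168.88.191, so the claimed address sits inside the block the plugin loaded. A minimal Go sketch using only the standard library, with the block and address copied from the entries above:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address copied from the IPAM entries above.
	_, block, err := net.ParseCIDR("192.168.88.128/26")
	if err != nil {
		panic(err)
	}
	ip := net.ParseIP("192.168.88.130")

	ones, bits := block.Mask.Size() // 26, 32
	size := 1 << uint(bits-ones)    // 64 addresses in a /26
	fmt.Printf("block %s holds %d addresses\n", block, size)
	fmt.Printf("%s inside the block: %v\n", ip, block.Contains(ip))
}

Running it prints that the block holds 64 addresses and that 192.168.88.130 is contained in it, matching the claim recorded above.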
Jul 11 00:26:38.852961 containerd[1542]: 2025-07-11 00:26:38.831 [INFO][4427] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" HandleID="k8s-pod-network.2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.853479 containerd[1542]: 2025-07-11 00:26:38.833 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0", GenerateName:"calico-kube-controllers-6bbc6b4cc-", Namespace:"calico-system", SelfLink:"", UID:"f8b0240e-8f74-491a-84c4-c496b1ecf4cc", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc6b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bbc6b4cc-xqcp5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e7b680bebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:38.853479 containerd[1542]: 2025-07-11 00:26:38.833 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.853479 containerd[1542]: 2025-07-11 00:26:38.833 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e7b680bebf ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.853479 containerd[1542]: 2025-07-11 00:26:38.835 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.853479 containerd[1542]: 2025-07-11 00:26:38.836 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0", GenerateName:"calico-kube-controllers-6bbc6b4cc-", Namespace:"calico-system", SelfLink:"", UID:"f8b0240e-8f74-491a-84c4-c496b1ecf4cc", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc6b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c", Pod:"calico-kube-controllers-6bbc6b4cc-xqcp5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e7b680bebf", MAC:"da:0e:55:ee:14:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:38.853479 containerd[1542]: 2025-07-11 00:26:38.850 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c" Namespace="calico-system" Pod="calico-kube-controllers-6bbc6b4cc-xqcp5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:38.867496 containerd[1542]: time="2025-07-11T00:26:38.867396646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:38.867496 containerd[1542]: time="2025-07-11T00:26:38.867466886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:38.867640 containerd[1542]: time="2025-07-11T00:26:38.867494606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:38.867640 containerd[1542]: time="2025-07-11T00:26:38.867600726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:38.897342 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:38.927486 containerd[1542]: time="2025-07-11T00:26:38.927443774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bbc6b4cc-xqcp5,Uid:f8b0240e-8f74-491a-84c4-c496b1ecf4cc,Namespace:calico-system,Attempt:1,} returns sandbox id \"2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c\"" Jul 11 00:26:38.928796 containerd[1542]: time="2025-07-11T00:26:38.928772931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:26:39.601277 containerd[1542]: time="2025-07-11T00:26:39.601228184Z" level=info msg="StopPodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\"" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.661 [INFO][4503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.661 [INFO][4503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" iface="eth0" netns="/var/run/netns/cni-a7f6190a-06d5-0319-ceda-b9776c104cc2" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.661 [INFO][4503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" iface="eth0" netns="/var/run/netns/cni-a7f6190a-06d5-0319-ceda-b9776c104cc2" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.662 [INFO][4503] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" iface="eth0" netns="/var/run/netns/cni-a7f6190a-06d5-0319-ceda-b9776c104cc2" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.662 [INFO][4503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.662 [INFO][4503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.679 [INFO][4512] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.679 [INFO][4512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.679 [INFO][4512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.688 [WARNING][4512] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.688 [INFO][4512] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.689 [INFO][4512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:39.692794 containerd[1542]: 2025-07-11 00:26:39.691 [INFO][4503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:39.693738 containerd[1542]: time="2025-07-11T00:26:39.693618022Z" level=info msg="TearDown network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" successfully" Jul 11 00:26:39.693738 containerd[1542]: time="2025-07-11T00:26:39.693648102Z" level=info msg="StopPodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" returns successfully" Jul 11 00:26:39.694990 containerd[1542]: time="2025-07-11T00:26:39.694940580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-4smck,Uid:2771ca54-4f98-4224-98e6-fa1c41a6b452,Namespace:calico-system,Attempt:1,}" Jul 11 00:26:39.695860 systemd[1]: run-netns-cni\x2da7f6190a\x2d06d5\x2d0319\x2dceda\x2db9776c104cc2.mount: Deactivated successfully. Jul 11 00:26:39.861766 systemd-networkd[1230]: cali990a5d7ad6e: Link UP Jul 11 00:26:39.862142 systemd-networkd[1230]: cali990a5d7ad6e: Gained carrier Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.799 [INFO][4521] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--4smck-eth0 goldmane-58fd7646b9- calico-system 2771ca54-4f98-4224-98e6-fa1c41a6b452 938 0 2025-07-11 00:26:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-4smck eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali990a5d7ad6e [] [] }} ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.799 [INFO][4521] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.822 [INFO][4535] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" HandleID="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.822 
[INFO][4535] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" HandleID="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-4smck", "timestamp":"2025-07-11 00:26:39.822652756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.822 [INFO][4535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.823 [INFO][4535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.823 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.832 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.836 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.840 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.842 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.844 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.844 [INFO][4535] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.845 [INFO][4535] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.849 [INFO][4535] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.855 [INFO][4535] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.855 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" host="localhost" Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.855 [INFO][4535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:26:39.878939 containerd[1542]: 2025-07-11 00:26:39.855 [INFO][4535] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" HandleID="k8s-pod-network.3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.879479 containerd[1542]: 2025-07-11 00:26:39.858 [INFO][4521] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4smck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2771ca54-4f98-4224-98e6-fa1c41a6b452", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-4smck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali990a5d7ad6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:39.879479 containerd[1542]: 2025-07-11 00:26:39.858 [INFO][4521] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.879479 containerd[1542]: 2025-07-11 00:26:39.858 [INFO][4521] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali990a5d7ad6e ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.879479 containerd[1542]: 2025-07-11 00:26:39.860 [INFO][4521] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.879479 containerd[1542]: 2025-07-11 00:26:39.861 [INFO][4521] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4smck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2771ca54-4f98-4224-98e6-fa1c41a6b452", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c", Pod:"goldmane-58fd7646b9-4smck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali990a5d7ad6e", MAC:"26:fb:e0:0b:5f:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:39.879479 containerd[1542]: 2025-07-11 00:26:39.872 [INFO][4521] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c" Namespace="calico-system" Pod="goldmane-58fd7646b9-4smck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:39.898915 containerd[1542]: time="2025-07-11T00:26:39.898816502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:39.898915 containerd[1542]: time="2025-07-11T00:26:39.898892422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:39.898915 containerd[1542]: time="2025-07-11T00:26:39.898903862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:39.899160 containerd[1542]: time="2025-07-11T00:26:39.898985622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:39.926190 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:39.952486 containerd[1542]: time="2025-07-11T00:26:39.952422248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-4smck,Uid:2771ca54-4f98-4224-98e6-fa1c41a6b452,Namespace:calico-system,Attempt:1,} returns sandbox id \"3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c\"" Jul 11 00:26:40.510100 systemd[1]: Started sshd@7-10.0.0.118:22-10.0.0.1:58120.service - OpenSSH per-connection server daemon (10.0.0.1:58120). 
Jul 11 00:26:40.552848 sshd[4601]: Accepted publickey for core from 10.0.0.1 port 58120 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:40.554917 sshd[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:40.561700 systemd-logind[1524]: New session 8 of user core. Jul 11 00:26:40.568111 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:26:40.624308 systemd-networkd[1230]: cali3e7b680bebf: Gained IPv6LL Jul 11 00:26:40.804901 containerd[1542]: time="2025-07-11T00:26:40.804784802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:40.807191 containerd[1542]: time="2025-07-11T00:26:40.805712121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 11 00:26:40.807191 containerd[1542]: time="2025-07-11T00:26:40.806715639Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:40.808895 containerd[1542]: time="2025-07-11T00:26:40.808864836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:40.811241 containerd[1542]: time="2025-07-11T00:26:40.811211312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.882404741s" Jul 11 00:26:40.811337 containerd[1542]: time="2025-07-11T00:26:40.811244632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 11 00:26:40.812391 containerd[1542]: time="2025-07-11T00:26:40.812368030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:26:40.822051 containerd[1542]: time="2025-07-11T00:26:40.822007174Z" level=info msg="CreateContainer within sandbox \"2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:26:40.831241 containerd[1542]: time="2025-07-11T00:26:40.831097679Z" level=info msg="CreateContainer within sandbox \"2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"63609890324df2ea4d6e60dd33518edb816b806d0239767f5ffeac972b152329\"" Jul 11 00:26:40.831685 containerd[1542]: time="2025-07-11T00:26:40.831651918Z" level=info msg="StartContainer for \"63609890324df2ea4d6e60dd33518edb816b806d0239767f5ffeac972b152329\"" Jul 11 00:26:40.848229 sshd[4601]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:40.852916 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:26:40.854328 systemd[1]: sshd@7-10.0.0.118:22-10.0.0.1:58120.service: Deactivated successfully. Jul 11 00:26:40.878558 systemd[1]: session-8.scope: Deactivated successfully. 
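The kube-controllers pull above reports bytes read=48128336 and completion "in 1.882404741s" (the image size is listed separately as 49497545). A short Go sketch of the throughput arithmetic using those figures; reading "bytes read" as the amount transferred during the pull is an interpretation, not something the log states.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the "stop pulling image" / "Pulled image" entries above.
	bytesRead := 48128336.0
	elapsed, err := time.ParseDuration("1.882404741s")
	if err != nil {
		panic(err)
	}

	mib := bytesRead / (1 << 20)
	fmt.Printf("read %.1f MiB in %s (about %.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
}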
Jul 11 00:26:40.881059 systemd-logind[1524]: Removed session 8. Jul 11 00:26:40.910050 containerd[1542]: time="2025-07-11T00:26:40.909989109Z" level=info msg="StartContainer for \"63609890324df2ea4d6e60dd33518edb816b806d0239767f5ffeac972b152329\" returns successfully" Jul 11 00:26:41.601716 containerd[1542]: time="2025-07-11T00:26:41.601204635Z" level=info msg="StopPodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\"" Jul 11 00:26:41.601716 containerd[1542]: time="2025-07-11T00:26:41.601556275Z" level=info msg="StopPodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\"" Jul 11 00:26:41.603660 containerd[1542]: time="2025-07-11T00:26:41.603628351Z" level=info msg="StopPodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\"" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.663 [INFO][4706] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.664 [INFO][4706] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" iface="eth0" netns="/var/run/netns/cni-ef2c6bfa-4fab-dc79-d9ac-d3000ce6264b" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.665 [INFO][4706] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" iface="eth0" netns="/var/run/netns/cni-ef2c6bfa-4fab-dc79-d9ac-d3000ce6264b" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.665 [INFO][4706] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" iface="eth0" netns="/var/run/netns/cni-ef2c6bfa-4fab-dc79-d9ac-d3000ce6264b" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.665 [INFO][4706] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.665 [INFO][4706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.687 [INFO][4729] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.688 [INFO][4729] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.688 [INFO][4729] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.696 [WARNING][4729] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.697 [INFO][4729] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.698 [INFO][4729] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:41.703368 containerd[1542]: 2025-07-11 00:26:41.701 [INFO][4706] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:41.707067 containerd[1542]: time="2025-07-11T00:26:41.703486038Z" level=info msg="TearDown network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" successfully" Jul 11 00:26:41.707067 containerd[1542]: time="2025-07-11T00:26:41.703514757Z" level=info msg="StopPodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" returns successfully" Jul 11 00:26:41.707538 kubelet[2609]: E0711 00:26:41.706499 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:41.706066 systemd[1]: run-netns-cni\x2def2c6bfa\x2d4fab\x2ddc79\x2dd9ac\x2dd3000ce6264b.mount: Deactivated successfully. Jul 11 00:26:41.709527 containerd[1542]: time="2025-07-11T00:26:41.709157549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-677wx,Uid:68360d6b-1341-4cc8-9b2e-e1fda0c521fc,Namespace:kube-system,Attempt:1,}" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.680 [INFO][4705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.681 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" iface="eth0" netns="/var/run/netns/cni-831a50e7-b978-22ce-65bb-10c9d6665bd0" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.681 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" iface="eth0" netns="/var/run/netns/cni-831a50e7-b978-22ce-65bb-10c9d6665bd0" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" iface="eth0" netns="/var/run/netns/cni-831a50e7-b978-22ce-65bb-10c9d6665bd0" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.709 [INFO][4739] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.709 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.710 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.727 [WARNING][4739] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.727 [INFO][4739] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.729 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:41.738963 containerd[1542]: 2025-07-11 00:26:41.734 [INFO][4705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:41.738963 containerd[1542]: time="2025-07-11T00:26:41.737351585Z" level=info msg="TearDown network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" successfully" Jul 11 00:26:41.738963 containerd[1542]: time="2025-07-11T00:26:41.737958584Z" level=info msg="StopPodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" returns successfully" Jul 11 00:26:41.739369 kubelet[2609]: E0711 00:26:41.738293 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:41.740753 systemd[1]: run-netns-cni\x2d831a50e7\x2db978\x2d22ce\x2d65bb\x2d10c9d6665bd0.mount: Deactivated successfully. 
Jul 11 00:26:41.742917 containerd[1542]: time="2025-07-11T00:26:41.741034420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qjjwl,Uid:65271ea6-5134-4ff0-a88f-9119ebccb488,Namespace:kube-system,Attempt:1,}" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.680 [INFO][4707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.682 [INFO][4707] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" iface="eth0" netns="/var/run/netns/cni-c903ab35-e6a3-dada-9ef4-7d4ac31baa00" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4707] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" iface="eth0" netns="/var/run/netns/cni-c903ab35-e6a3-dada-9ef4-7d4ac31baa00" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4707] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" iface="eth0" netns="/var/run/netns/cni-c903ab35-e6a3-dada-9ef4-7d4ac31baa00" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.683 [INFO][4707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.709 [INFO][4737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.709 [INFO][4737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.729 [INFO][4737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.744 [WARNING][4737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.744 [INFO][4737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.747 [INFO][4737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:41.755620 containerd[1542]: 2025-07-11 00:26:41.751 [INFO][4707] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:26:41.756038 containerd[1542]: time="2025-07-11T00:26:41.755809277Z" level=info msg="TearDown network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" successfully" Jul 11 00:26:41.756038 containerd[1542]: time="2025-07-11T00:26:41.755857717Z" level=info msg="StopPodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" returns successfully" Jul 11 00:26:41.756482 containerd[1542]: time="2025-07-11T00:26:41.756447876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-xlnxl,Uid:240131e8-c1af-4198-9629-bd8842d57a9c,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:26:41.759915 systemd[1]: run-netns-cni\x2dc903ab35\x2de6a3\x2ddada\x2d9ef4\x2d7d4ac31baa00.mount: Deactivated successfully. Jul 11 00:26:41.785510 kubelet[2609]: I0711 00:26:41.785169 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bbc6b4cc-xqcp5" podStartSLOduration=19.901588614 podStartE2EDuration="21.785151392s" podCreationTimestamp="2025-07-11 00:26:20 +0000 UTC" firstStartedPulling="2025-07-11 00:26:38.928588892 +0000 UTC m=+40.418146510" lastFinishedPulling="2025-07-11 00:26:40.81215171 +0000 UTC m=+42.301709288" observedRunningTime="2025-07-11 00:26:41.785119952 +0000 UTC m=+43.274677570" watchObservedRunningTime="2025-07-11 00:26:41.785151392 +0000 UTC m=+43.274709010" Jul 11 00:26:41.906065 systemd-networkd[1230]: cali990a5d7ad6e: Gained IPv6LL Jul 11 00:26:41.923272 systemd-networkd[1230]: cali64d5e4f4734: Link UP Jul 11 00:26:41.923996 systemd-networkd[1230]: cali64d5e4f4734: Gained carrier Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.787 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--677wx-eth0 coredns-7c65d6cfc9- kube-system 68360d6b-1341-4cc8-9b2e-e1fda0c521fc 987 0 2025-07-11 00:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-677wx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64d5e4f4734 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.787 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.835 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" HandleID="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.835 [INFO][4791] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" 
HandleID="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2540), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-677wx", "timestamp":"2025-07-11 00:26:41.835204675 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.835 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.835 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.835 [INFO][4791] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.852 [INFO][4791] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.857 [INFO][4791] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.862 [INFO][4791] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.865 [INFO][4791] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.871 [INFO][4791] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.871 [INFO][4791] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.875 [INFO][4791] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543 Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.895 [INFO][4791] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.911 [INFO][4791] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.911 [INFO][4791] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" host="localhost" Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.911 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:26:41.944384 containerd[1542]: 2025-07-11 00:26:41.911 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" HandleID="k8s-pod-network.73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.945201 containerd[1542]: 2025-07-11 00:26:41.915 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--677wx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68360d6b-1341-4cc8-9b2e-e1fda0c521fc", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-677wx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64d5e4f4734", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:41.945201 containerd[1542]: 2025-07-11 00:26:41.915 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.945201 containerd[1542]: 2025-07-11 00:26:41.915 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64d5e4f4734 ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.945201 containerd[1542]: 2025-07-11 00:26:41.924 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:41.945201 
containerd[1542]: 2025-07-11 00:26:41.925 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--677wx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68360d6b-1341-4cc8-9b2e-e1fda0c521fc", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543", Pod:"coredns-7c65d6cfc9-677wx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64d5e4f4734", MAC:"32:e6:0b:37:01:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:41.945201 containerd[1542]: 2025-07-11 00:26:41.934 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543" Namespace="kube-system" Pod="coredns-7c65d6cfc9-677wx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:42.062699 systemd-networkd[1230]: califc024b69f44: Link UP Jul 11 00:26:42.062914 systemd-networkd[1230]: califc024b69f44: Gained carrier Jul 11 00:26:42.075090 containerd[1542]: time="2025-07-11T00:26:42.072110996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:42.075090 containerd[1542]: time="2025-07-11T00:26:42.072188156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:42.075090 containerd[1542]: time="2025-07-11T00:26:42.072203356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.080815 containerd[1542]: time="2025-07-11T00:26:42.072326116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.818 [INFO][4766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0 coredns-7c65d6cfc9- kube-system 65271ea6-5134-4ff0-a88f-9119ebccb488 988 0 2025-07-11 00:26:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qjjwl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc024b69f44 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.818 [INFO][4766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.855 [INFO][4803] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" HandleID="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.855 [INFO][4803] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" HandleID="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004365d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qjjwl", "timestamp":"2025-07-11 00:26:41.855531443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.855 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.911 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.913 [INFO][4803] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.954 [INFO][4803] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.959 [INFO][4803] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.963 [INFO][4803] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.965 [INFO][4803] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.968 [INFO][4803] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.969 [INFO][4803] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.970 [INFO][4803] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79 Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:41.986 [INFO][4803] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:42.051 [INFO][4803] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:42.051 [INFO][4803] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" host="localhost" Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:42.051 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:26:42.080815 containerd[1542]: 2025-07-11 00:26:42.051 [INFO][4803] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" HandleID="k8s-pod-network.2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.081312 containerd[1542]: 2025-07-11 00:26:42.059 [INFO][4766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65271ea6-5134-4ff0-a88f-9119ebccb488", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qjjwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc024b69f44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:42.081312 containerd[1542]: 2025-07-11 00:26:42.059 [INFO][4766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.081312 containerd[1542]: 2025-07-11 00:26:42.059 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc024b69f44 ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.081312 containerd[1542]: 2025-07-11 00:26:42.063 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.081312 
containerd[1542]: 2025-07-11 00:26:42.063 [INFO][4766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65271ea6-5134-4ff0-a88f-9119ebccb488", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79", Pod:"coredns-7c65d6cfc9-qjjwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc024b69f44", MAC:"8a:fc:72:da:21:f7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:42.081312 containerd[1542]: 2025-07-11 00:26:42.076 [INFO][4766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qjjwl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:42.114446 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:42.124139 systemd-networkd[1230]: cali9758fb15c91: Link UP Jul 11 00:26:42.124566 containerd[1542]: time="2025-07-11T00:26:42.123342362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:42.125065 containerd[1542]: time="2025-07-11T00:26:42.124895120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:42.125065 containerd[1542]: time="2025-07-11T00:26:42.124921520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.125188 systemd-networkd[1230]: cali9758fb15c91: Gained carrier Jul 11 00:26:42.130073 containerd[1542]: time="2025-07-11T00:26:42.129992033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:41.850 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0 calico-apiserver-76d459d97b- calico-apiserver 240131e8-c1af-4198-9629-bd8842d57a9c 989 0 2025-07-11 00:26:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d459d97b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76d459d97b-xlnxl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9758fb15c91 [] [] }} ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:41.851 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:41.939 [INFO][4814] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" HandleID="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:41.940 [INFO][4814] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" HandleID="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000531160), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76d459d97b-xlnxl", "timestamp":"2025-07-11 00:26:41.939690274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:41.940 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.052 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.052 [INFO][4814] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.062 [INFO][4814] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.078 [INFO][4814] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.087 [INFO][4814] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.090 [INFO][4814] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.093 [INFO][4814] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.093 [INFO][4814] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.095 [INFO][4814] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.100 [INFO][4814] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.108 [INFO][4814] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.109 [INFO][4814] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" host="localhost" Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.109 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
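Interleaving the three requests so far ([4791], [4803], [4814]) shows how the host-wide IPAM lock serialises them: [4803] logs "About to acquire" at 00:26:41.855 but only logs "Acquired" at 00:26:41.911, right after [4791] releases, and [4814] in turn waits from 00:26:41.940 until 00:26:42.052, just after [4803] releases at 00:26:42.051. A rough Go sketch that pairs those two messages per tag to measure the wait, assuming journal records shaped like the ones captured here (the pattern and program are illustrative, not a Calico tool):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // evRe captures the timestamp, the [INFO][<tag>] request tag, and whether the
    // record is the "About to acquire" or the "Acquired" side of the IPAM lock.
    var evRe = regexp.MustCompile(`(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[INFO\]\[(\d+)\] ipam/ipam_plugin\.go \d+: (About to acquire|Acquired) host-wide IPAM lock`)

    func main() {
        pending := make(map[string]time.Time) // tag -> time it started waiting
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            for _, m := range evRe.FindAllStringSubmatch(sc.Text(), -1) {
                ts, err := time.Parse("2006-01-02 15:04:05.000", m[1])
                if err != nil {
                    continue
                }
                if m[3] == "About to acquire" {
                    pending[m[2]] = ts
                } else if start, ok := pending[m[2]]; ok {
                    fmt.Printf("[%s] waited %v for the host-wide IPAM lock\n", m[2], ts.Sub(start))
                    delete(pending, m[2])
                }
            }
        }
    }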
Jul 11 00:26:42.150058 containerd[1542]: 2025-07-11 00:26:42.109 [INFO][4814] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" HandleID="k8s-pod-network.4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.150600 containerd[1542]: 2025-07-11 00:26:42.120 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"240131e8-c1af-4198-9629-bd8842d57a9c", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76d459d97b-xlnxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9758fb15c91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:42.150600 containerd[1542]: 2025-07-11 00:26:42.121 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.150600 containerd[1542]: 2025-07-11 00:26:42.121 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9758fb15c91 ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.150600 containerd[1542]: 2025-07-11 00:26:42.131 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.150600 containerd[1542]: 2025-07-11 00:26:42.132 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"240131e8-c1af-4198-9629-bd8842d57a9c", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc", Pod:"calico-apiserver-76d459d97b-xlnxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9758fb15c91", MAC:"ba:08:da:da:88:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:42.150600 containerd[1542]: 2025-07-11 00:26:42.143 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-xlnxl" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:26:42.176013 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:42.199515 containerd[1542]: time="2025-07-11T00:26:42.199472092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-677wx,Uid:68360d6b-1341-4cc8-9b2e-e1fda0c521fc,Namespace:kube-system,Attempt:1,} returns sandbox id \"73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543\"" Jul 11 00:26:42.202876 kubelet[2609]: E0711 00:26:42.202847 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:42.204392 containerd[1542]: time="2025-07-11T00:26:42.204235006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:42.204392 containerd[1542]: time="2025-07-11T00:26:42.204304765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:42.204392 containerd[1542]: time="2025-07-11T00:26:42.204320005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.204525 containerd[1542]: time="2025-07-11T00:26:42.204411285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.207538 containerd[1542]: time="2025-07-11T00:26:42.206479322Z" level=info msg="CreateContainer within sandbox \"73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:26:42.238170 containerd[1542]: time="2025-07-11T00:26:42.238120597Z" level=info msg="CreateContainer within sandbox \"73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"983beafcc342a16da879f72758c81aab1aa0211742ba38b67071619cbe9eb8ff\"" Jul 11 00:26:42.238304 containerd[1542]: time="2025-07-11T00:26:42.238224636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qjjwl,Uid:65271ea6-5134-4ff0-a88f-9119ebccb488,Namespace:kube-system,Attempt:1,} returns sandbox id \"2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79\"" Jul 11 00:26:42.239513 containerd[1542]: time="2025-07-11T00:26:42.238860156Z" level=info msg="StartContainer for \"983beafcc342a16da879f72758c81aab1aa0211742ba38b67071619cbe9eb8ff\"" Jul 11 00:26:42.239596 kubelet[2609]: E0711 00:26:42.239123 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:42.241383 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:42.243087 containerd[1542]: time="2025-07-11T00:26:42.243000430Z" level=info msg="CreateContainer within sandbox \"2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:26:42.269177 containerd[1542]: time="2025-07-11T00:26:42.269054712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-xlnxl,Uid:240131e8-c1af-4198-9629-bd8842d57a9c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc\"" Jul 11 00:26:42.276859 containerd[1542]: time="2025-07-11T00:26:42.276119662Z" level=info msg="CreateContainer within sandbox \"2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b5f9bdcfe46425dd96226168ab9d2e82af162b0330f7b0c5cb59d08d3774ed6\"" Jul 11 00:26:42.288637 containerd[1542]: time="2025-07-11T00:26:42.288538884Z" level=info msg="StartContainer for \"9b5f9bdcfe46425dd96226168ab9d2e82af162b0330f7b0c5cb59d08d3774ed6\"" Jul 11 00:26:42.334904 containerd[1542]: time="2025-07-11T00:26:42.334676337Z" level=info msg="StartContainer for \"983beafcc342a16da879f72758c81aab1aa0211742ba38b67071619cbe9eb8ff\" returns successfully" Jul 11 00:26:42.362434 containerd[1542]: time="2025-07-11T00:26:42.362335497Z" level=info msg="StartContainer for \"9b5f9bdcfe46425dd96226168ab9d2e82af162b0330f7b0c5cb59d08d3774ed6\" returns successfully" Jul 11 00:26:42.601402 containerd[1542]: time="2025-07-11T00:26:42.601359112Z" level=info msg="StopPodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\"" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.664 [INFO][5064] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.664 [INFO][5064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" iface="eth0" netns="/var/run/netns/cni-09b4aec7-d4da-3fce-8623-4879148ecc16" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.665 [INFO][5064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" iface="eth0" netns="/var/run/netns/cni-09b4aec7-d4da-3fce-8623-4879148ecc16" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.665 [INFO][5064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" iface="eth0" netns="/var/run/netns/cni-09b4aec7-d4da-3fce-8623-4879148ecc16" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.665 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.665 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.694 [INFO][5072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.697 [INFO][5072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.697 [INFO][5072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.709 [WARNING][5072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.709 [INFO][5072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.711 [INFO][5072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:42.719258 containerd[1542]: 2025-07-11 00:26:42.714 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:42.719646 containerd[1542]: time="2025-07-11T00:26:42.719560821Z" level=info msg="TearDown network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" successfully" Jul 11 00:26:42.719646 containerd[1542]: time="2025-07-11T00:26:42.719589981Z" level=info msg="StopPodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" returns successfully" Jul 11 00:26:42.720340 containerd[1542]: time="2025-07-11T00:26:42.720314620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-9692q,Uid:f54b44fa-bd20-4c12-91fe-34f4011b849a,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:26:42.773630 kubelet[2609]: E0711 00:26:42.773512 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:42.776223 kubelet[2609]: E0711 00:26:42.775988 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:42.776999 kubelet[2609]: I0711 00:26:42.776858 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:42.788202 kubelet[2609]: I0711 00:26:42.787499 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-677wx" podStartSLOduration=36.787482283 podStartE2EDuration="36.787482283s" podCreationTimestamp="2025-07-11 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:42.787097644 +0000 UTC m=+44.276655262" watchObservedRunningTime="2025-07-11 00:26:42.787482283 +0000 UTC m=+44.277039901" Jul 11 00:26:42.819720 kubelet[2609]: I0711 00:26:42.819370 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qjjwl" podStartSLOduration=36.819348237 podStartE2EDuration="36.819348237s" podCreationTimestamp="2025-07-11 00:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:26:42.818234119 +0000 UTC m=+44.307791737" watchObservedRunningTime="2025-07-11 00:26:42.819348237 +0000 UTC m=+44.308905855" Jul 11 00:26:42.880536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633739116.mount: Deactivated successfully. Jul 11 00:26:42.881320 systemd[1]: run-netns-cni\x2d09b4aec7\x2dd4da\x2d3fce\x2d8623\x2d4879148ecc16.mount: Deactivated successfully. 
Jul 11 00:26:42.960324 systemd-networkd[1230]: cali39cbe2affa0: Link UP Jul 11 00:26:42.960544 systemd-networkd[1230]: cali39cbe2affa0: Gained carrier Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.856 [INFO][5085] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0 calico-apiserver-76d459d97b- calico-apiserver f54b44fa-bd20-4c12-91fe-34f4011b849a 1021 0 2025-07-11 00:26:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76d459d97b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76d459d97b-9692q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39cbe2affa0 [] [] }} ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.856 [INFO][5085] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.901 [INFO][5103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" HandleID="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.901 [INFO][5103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" HandleID="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76d459d97b-9692q", "timestamp":"2025-07-11 00:26:42.901801518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.902 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.902 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.902 [INFO][5103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.916 [INFO][5103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.921 [INFO][5103] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.926 [INFO][5103] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.928 [INFO][5103] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.932 [INFO][5103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.932 [INFO][5103] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.933 [INFO][5103] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9 Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.941 [INFO][5103] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.947 [INFO][5103] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.947 [INFO][5103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" host="localhost" Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.947 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:26:42.977856 containerd[1542]: 2025-07-11 00:26:42.947 [INFO][5103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" HandleID="k8s-pod-network.1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.978921 containerd[1542]: 2025-07-11 00:26:42.951 [INFO][5085] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54b44fa-bd20-4c12-91fe-34f4011b849a", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76d459d97b-9692q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cbe2affa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:42.978921 containerd[1542]: 2025-07-11 00:26:42.952 [INFO][5085] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.978921 containerd[1542]: 2025-07-11 00:26:42.952 [INFO][5085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39cbe2affa0 ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.978921 containerd[1542]: 2025-07-11 00:26:42.960 [INFO][5085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.978921 containerd[1542]: 2025-07-11 00:26:42.961 [INFO][5085] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54b44fa-bd20-4c12-91fe-34f4011b849a", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9", Pod:"calico-apiserver-76d459d97b-9692q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cbe2affa0", MAC:"3a:7f:3f:03:24:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:42.978921 containerd[1542]: 2025-07-11 00:26:42.974 [INFO][5085] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9" Namespace="calico-apiserver" Pod="calico-apiserver-76d459d97b-9692q" WorkloadEndpoint="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:42.999395 containerd[1542]: time="2025-07-11T00:26:42.998908298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:42.999395 containerd[1542]: time="2025-07-11T00:26:42.999207017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:42.999395 containerd[1542]: time="2025-07-11T00:26:42.999221257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:42.999395 containerd[1542]: time="2025-07-11T00:26:42.999336337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.024521 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:43.043680 containerd[1542]: time="2025-07-11T00:26:43.043638677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76d459d97b-9692q,Uid:f54b44fa-bd20-4c12-91fe-34f4011b849a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9\"" Jul 11 00:26:43.311996 systemd-networkd[1230]: califc024b69f44: Gained IPv6LL Jul 11 00:26:43.314562 containerd[1542]: time="2025-07-11T00:26:43.314003031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:43.314562 containerd[1542]: time="2025-07-11T00:26:43.314530510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 11 00:26:43.315475 containerd[1542]: time="2025-07-11T00:26:43.315448069Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:43.317802 containerd[1542]: time="2025-07-11T00:26:43.317762466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:43.318821 containerd[1542]: time="2025-07-11T00:26:43.318782944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.506383154s" Jul 11 00:26:43.318949 containerd[1542]: time="2025-07-11T00:26:43.318930224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 11 00:26:43.319877 containerd[1542]: time="2025-07-11T00:26:43.319844223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:26:43.321944 containerd[1542]: time="2025-07-11T00:26:43.320813302Z" level=info msg="CreateContainer within sandbox \"3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:26:43.333084 containerd[1542]: time="2025-07-11T00:26:43.333050445Z" level=info msg="CreateContainer within sandbox \"3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ef61e719416e82746224526e31a65f482598a84f2d456029f53d3423927b4ac4\"" Jul 11 00:26:43.333956 containerd[1542]: time="2025-07-11T00:26:43.333775884Z" level=info msg="StartContainer for \"ef61e719416e82746224526e31a65f482598a84f2d456029f53d3423927b4ac4\"" Jul 11 00:26:43.406516 containerd[1542]: time="2025-07-11T00:26:43.406392426Z" level=info msg="StartContainer for \"ef61e719416e82746224526e31a65f482598a84f2d456029f53d3423927b4ac4\" returns successfully" Jul 11 00:26:43.601197 containerd[1542]: time="2025-07-11T00:26:43.600949682Z" level=info msg="StopPodSandbox for 
\"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\"" Jul 11 00:26:43.631971 systemd-networkd[1230]: cali64d5e4f4734: Gained IPv6LL Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.646 [INFO][5215] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.646 [INFO][5215] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" iface="eth0" netns="/var/run/netns/cni-dac3286a-af04-1af4-f1aa-36b416f539ca" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.646 [INFO][5215] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" iface="eth0" netns="/var/run/netns/cni-dac3286a-af04-1af4-f1aa-36b416f539ca" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.647 [INFO][5215] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" iface="eth0" netns="/var/run/netns/cni-dac3286a-af04-1af4-f1aa-36b416f539ca" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.647 [INFO][5215] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.647 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.665 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.665 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.665 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.674 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.674 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.675 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:43.679640 containerd[1542]: 2025-07-11 00:26:43.677 [INFO][5215] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:43.679640 containerd[1542]: time="2025-07-11T00:26:43.679602176Z" level=info msg="TearDown network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" successfully" Jul 11 00:26:43.680259 containerd[1542]: time="2025-07-11T00:26:43.679915575Z" level=info msg="StopPodSandbox for \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" returns successfully" Jul 11 00:26:43.680857 containerd[1542]: time="2025-07-11T00:26:43.680497095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v4x,Uid:6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab,Namespace:calico-system,Attempt:1,}" Jul 11 00:26:43.778930 systemd-networkd[1230]: calie0abd4fc2dd: Link UP Jul 11 00:26:43.779158 systemd-networkd[1230]: calie0abd4fc2dd: Gained carrier Jul 11 00:26:43.801260 kubelet[2609]: E0711 00:26:43.799938 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:43.801260 kubelet[2609]: E0711 00:26:43.800506 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.717 [INFO][5232] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--79v4x-eth0 csi-node-driver- calico-system 6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab 1050 0 2025-07-11 00:26:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-79v4x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie0abd4fc2dd [] [] }} ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.718 [INFO][5232] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.741 [INFO][5246] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" HandleID="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.741 [INFO][5246] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" HandleID="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-79v4x", 
"timestamp":"2025-07-11 00:26:43.741590412 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.741 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.741 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.741 [INFO][5246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.750 [INFO][5246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.755 [INFO][5246] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.759 [INFO][5246] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.760 [INFO][5246] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.763 [INFO][5246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.763 [INFO][5246] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.764 [INFO][5246] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146 Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.768 [INFO][5246] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.774 [INFO][5246] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.774 [INFO][5246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" host="localhost" Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.774 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:26:43.801949 containerd[1542]: 2025-07-11 00:26:43.774 [INFO][5246] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" HandleID="k8s-pod-network.7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.802424 containerd[1542]: 2025-07-11 00:26:43.776 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-79v4x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0abd4fc2dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:43.802424 containerd[1542]: 2025-07-11 00:26:43.776 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.802424 containerd[1542]: 2025-07-11 00:26:43.776 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0abd4fc2dd ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.802424 containerd[1542]: 2025-07-11 00:26:43.778 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.802424 containerd[1542]: 2025-07-11 00:26:43.778 [INFO][5232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146", Pod:"csi-node-driver-79v4x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0abd4fc2dd", MAC:"fa:99:4b:61:75:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:43.802424 containerd[1542]: 2025-07-11 00:26:43.793 [INFO][5232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146" Namespace="calico-system" Pod="csi-node-driver-79v4x" WorkloadEndpoint="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:43.822920 containerd[1542]: time="2025-07-11T00:26:43.822550902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:26:43.822920 containerd[1542]: time="2025-07-11T00:26:43.822627022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:26:43.822920 containerd[1542]: time="2025-07-11T00:26:43.822654102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.822920 containerd[1542]: time="2025-07-11T00:26:43.822760982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:26:43.845048 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:26:43.855493 containerd[1542]: time="2025-07-11T00:26:43.855394338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-79v4x,Uid:6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146\"" Jul 11 00:26:43.879009 systemd[1]: run-netns-cni\x2ddac3286a\x2daf04\x2d1af4\x2df1aa\x2d36b416f539ca.mount: Deactivated successfully. 
Jul 11 00:26:44.081325 systemd-networkd[1230]: cali9758fb15c91: Gained IPv6LL Jul 11 00:26:44.803496 kubelet[2609]: I0711 00:26:44.803471 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:44.803975 kubelet[2609]: E0711 00:26:44.803723 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:44.803975 kubelet[2609]: E0711 00:26:44.803758 2609 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:26:44.848025 systemd-networkd[1230]: cali39cbe2affa0: Gained IPv6LL Jul 11 00:26:45.185955 containerd[1542]: time="2025-07-11T00:26:45.185906411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.187629 containerd[1542]: time="2025-07-11T00:26:45.187593809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 11 00:26:45.189739 containerd[1542]: time="2025-07-11T00:26:45.188686208Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.190890 containerd[1542]: time="2025-07-11T00:26:45.190847285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.198538 containerd[1542]: time="2025-07-11T00:26:45.198495676Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.878614253s" Jul 11 00:26:45.198663 containerd[1542]: time="2025-07-11T00:26:45.198645436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:26:45.200155 containerd[1542]: time="2025-07-11T00:26:45.200123834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:26:45.200820 containerd[1542]: time="2025-07-11T00:26:45.200777753Z" level=info msg="CreateContainer within sandbox \"4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:26:45.217200 containerd[1542]: time="2025-07-11T00:26:45.217151334Z" level=info msg="CreateContainer within sandbox \"4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1cde6882f459b7678ff732d22480c84e8ae47a753f0e113f2cf3eaef671da2ac\"" Jul 11 00:26:45.218011 containerd[1542]: time="2025-07-11T00:26:45.217967573Z" level=info msg="StartContainer for \"1cde6882f459b7678ff732d22480c84e8ae47a753f0e113f2cf3eaef671da2ac\"" Jul 11 00:26:45.284984 containerd[1542]: time="2025-07-11T00:26:45.284932053Z" level=info msg="StartContainer for 
\"1cde6882f459b7678ff732d22480c84e8ae47a753f0e113f2cf3eaef671da2ac\" returns successfully" Jul 11 00:26:45.357273 kubelet[2609]: I0711 00:26:45.357195 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:45.450569 kubelet[2609]: I0711 00:26:45.447961 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-4smck" podStartSLOduration=23.081845202 podStartE2EDuration="26.447943219s" podCreationTimestamp="2025-07-11 00:26:19 +0000 UTC" firstStartedPulling="2025-07-11 00:26:39.953604326 +0000 UTC m=+41.443161904" lastFinishedPulling="2025-07-11 00:26:43.319702303 +0000 UTC m=+44.809259921" observedRunningTime="2025-07-11 00:26:43.806412164 +0000 UTC m=+45.295969782" watchObservedRunningTime="2025-07-11 00:26:45.447943219 +0000 UTC m=+46.937500797" Jul 11 00:26:45.455963 containerd[1542]: time="2025-07-11T00:26:45.455873130Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:45.457083 containerd[1542]: time="2025-07-11T00:26:45.457053768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:26:45.463918 containerd[1542]: time="2025-07-11T00:26:45.463873640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 263.171647ms" Jul 11 00:26:45.463918 containerd[1542]: time="2025-07-11T00:26:45.463915560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:26:45.467236 containerd[1542]: time="2025-07-11T00:26:45.467196116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:26:45.470921 containerd[1542]: time="2025-07-11T00:26:45.470882952Z" level=info msg="CreateContainer within sandbox \"1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:26:45.481492 containerd[1542]: time="2025-07-11T00:26:45.481441259Z" level=info msg="CreateContainer within sandbox \"1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"602e15f74f21b38db8ed2b2935e1d54e70d76ce38b9e4ae80a91d0ffa11ca79d\"" Jul 11 00:26:45.482006 containerd[1542]: time="2025-07-11T00:26:45.481980379Z" level=info msg="StartContainer for \"602e15f74f21b38db8ed2b2935e1d54e70d76ce38b9e4ae80a91d0ffa11ca79d\"" Jul 11 00:26:45.488964 systemd-networkd[1230]: calie0abd4fc2dd: Gained IPv6LL Jul 11 00:26:45.546505 containerd[1542]: time="2025-07-11T00:26:45.546457542Z" level=info msg="StartContainer for \"602e15f74f21b38db8ed2b2935e1d54e70d76ce38b9e4ae80a91d0ffa11ca79d\" returns successfully" Jul 11 00:26:45.821766 kubelet[2609]: I0711 00:26:45.821625 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:45.829081 kubelet[2609]: I0711 00:26:45.828999 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76d459d97b-9692q" podStartSLOduration=27.408540323 
podStartE2EDuration="29.828980766s" podCreationTimestamp="2025-07-11 00:26:16 +0000 UTC" firstStartedPulling="2025-07-11 00:26:43.045217555 +0000 UTC m=+44.534775133" lastFinishedPulling="2025-07-11 00:26:45.465657958 +0000 UTC m=+46.955215576" observedRunningTime="2025-07-11 00:26:45.827704807 +0000 UTC m=+47.317262425" watchObservedRunningTime="2025-07-11 00:26:45.828980766 +0000 UTC m=+47.318538384" Jul 11 00:26:45.863297 systemd[1]: Started sshd@8-10.0.0.118:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294). Jul 11 00:26:45.947432 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:45.951391 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:45.957798 systemd-logind[1524]: New session 9 of user core. Jul 11 00:26:45.963135 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:26:46.011541 kubelet[2609]: I0711 00:26:46.011474 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76d459d97b-xlnxl" podStartSLOduration=27.082549464 podStartE2EDuration="30.011455229s" podCreationTimestamp="2025-07-11 00:26:16 +0000 UTC" firstStartedPulling="2025-07-11 00:26:42.27052023 +0000 UTC m=+43.760077808" lastFinishedPulling="2025-07-11 00:26:45.199425915 +0000 UTC m=+46.688983573" observedRunningTime="2025-07-11 00:26:45.849428301 +0000 UTC m=+47.338985919" watchObservedRunningTime="2025-07-11 00:26:46.011455229 +0000 UTC m=+47.501012847" Jul 11 00:26:46.229353 sshd[5454]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:46.233325 systemd[1]: sshd@8-10.0.0.118:22-10.0.0.1:37294.service: Deactivated successfully. Jul 11 00:26:46.235790 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:26:46.235965 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:26:46.239322 systemd-logind[1524]: Removed session 9. 
Jul 11 00:26:46.758077 containerd[1542]: time="2025-07-11T00:26:46.758023836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:46.759016 containerd[1542]: time="2025-07-11T00:26:46.758984355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 11 00:26:46.765530 containerd[1542]: time="2025-07-11T00:26:46.765479308Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:46.767601 containerd[1542]: time="2025-07-11T00:26:46.767548146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:46.769795 containerd[1542]: time="2025-07-11T00:26:46.769733383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.302491227s" Jul 11 00:26:46.769901 containerd[1542]: time="2025-07-11T00:26:46.769799623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 11 00:26:46.773469 containerd[1542]: time="2025-07-11T00:26:46.772985980Z" level=info msg="CreateContainer within sandbox \"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:26:46.789453 containerd[1542]: time="2025-07-11T00:26:46.789118642Z" level=info msg="CreateContainer within sandbox \"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8e406f221843608a7e9a245f6e8df155d396f8c2637912150c246026d5849aa0\"" Jul 11 00:26:46.790294 containerd[1542]: time="2025-07-11T00:26:46.790175760Z" level=info msg="StartContainer for \"8e406f221843608a7e9a245f6e8df155d396f8c2637912150c246026d5849aa0\"" Jul 11 00:26:46.823325 kubelet[2609]: I0711 00:26:46.822988 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:26:46.853210 containerd[1542]: time="2025-07-11T00:26:46.853143530Z" level=info msg="StartContainer for \"8e406f221843608a7e9a245f6e8df155d396f8c2637912150c246026d5849aa0\" returns successfully" Jul 11 00:26:46.854755 containerd[1542]: time="2025-07-11T00:26:46.854672808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:26:48.041151 containerd[1542]: time="2025-07-11T00:26:48.040708960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:48.041943 containerd[1542]: time="2025-07-11T00:26:48.041771719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 11 00:26:48.043973 containerd[1542]: time="2025-07-11T00:26:48.042644198Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:48.045349 containerd[1542]: time="2025-07-11T00:26:48.045315716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:26:48.046843 containerd[1542]: time="2025-07-11T00:26:48.046786434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.192067786s" Jul 11 00:26:48.046891 containerd[1542]: time="2025-07-11T00:26:48.046840474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 11 00:26:48.049693 containerd[1542]: time="2025-07-11T00:26:48.049651431Z" level=info msg="CreateContainer within sandbox \"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:26:48.063508 containerd[1542]: time="2025-07-11T00:26:48.063449978Z" level=info msg="CreateContainer within sandbox \"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2b0b1d91a0f54766584dc040d7cc1b3f5c33e26c81ce5d757b07da2ea8b3b875\"" Jul 11 00:26:48.065948 containerd[1542]: time="2025-07-11T00:26:48.065914335Z" level=info msg="StartContainer for \"2b0b1d91a0f54766584dc040d7cc1b3f5c33e26c81ce5d757b07da2ea8b3b875\"" Jul 11 00:26:48.151199 containerd[1542]: time="2025-07-11T00:26:48.151154852Z" level=info msg="StartContainer for \"2b0b1d91a0f54766584dc040d7cc1b3f5c33e26c81ce5d757b07da2ea8b3b875\" returns successfully" Jul 11 00:26:48.675451 kubelet[2609]: I0711 00:26:48.675351 2609 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:26:48.677890 kubelet[2609]: I0711 00:26:48.677858 2609 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:26:50.628175 kubelet[2609]: I0711 00:26:50.627665 2609 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-79v4x" podStartSLOduration=26.43739398 podStartE2EDuration="30.627647359s" podCreationTimestamp="2025-07-11 00:26:20 +0000 UTC" firstStartedPulling="2025-07-11 00:26:43.857224895 +0000 UTC m=+45.346782513" lastFinishedPulling="2025-07-11 00:26:48.047478274 +0000 UTC m=+49.537035892" observedRunningTime="2025-07-11 00:26:48.850731206 +0000 UTC m=+50.340288824" watchObservedRunningTime="2025-07-11 00:26:50.627647359 +0000 UTC m=+52.117204977" Jul 11 00:26:51.241152 systemd[1]: Started sshd@9-10.0.0.118:22-10.0.0.1:37306.service - OpenSSH per-connection server daemon (10.0.0.1:37306). 
Jul 11 00:26:51.281565 sshd[5617]: Accepted publickey for core from 10.0.0.1 port 37306 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:51.283400 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:51.287222 systemd-logind[1524]: New session 10 of user core. Jul 11 00:26:51.291113 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:26:51.517554 sshd[5617]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:51.528140 systemd[1]: Started sshd@10-10.0.0.118:22-10.0.0.1:37316.service - OpenSSH per-connection server daemon (10.0.0.1:37316). Jul 11 00:26:51.528566 systemd[1]: sshd@9-10.0.0.118:22-10.0.0.1:37306.service: Deactivated successfully. Jul 11 00:26:51.532974 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:26:51.533612 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:26:51.535328 systemd-logind[1524]: Removed session 10. Jul 11 00:26:51.563932 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 37316 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:51.565197 sshd[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:51.569057 systemd-logind[1524]: New session 11 of user core. Jul 11 00:26:51.576121 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:26:51.817046 sshd[5630]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:51.824581 systemd[1]: Started sshd@11-10.0.0.118:22-10.0.0.1:37322.service - OpenSSH per-connection server daemon (10.0.0.1:37322). Jul 11 00:26:51.825063 systemd[1]: sshd@10-10.0.0.118:22-10.0.0.1:37316.service: Deactivated successfully. Jul 11 00:26:51.832349 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:26:51.835690 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:26:51.843639 systemd-logind[1524]: Removed session 11. Jul 11 00:26:51.874764 sshd[5646]: Accepted publickey for core from 10.0.0.1 port 37322 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:51.876382 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:51.880796 systemd-logind[1524]: New session 12 of user core. Jul 11 00:26:51.886272 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:26:52.050956 sshd[5646]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:52.054163 systemd[1]: sshd@11-10.0.0.118:22-10.0.0.1:37322.service: Deactivated successfully. Jul 11 00:26:52.057576 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:26:52.058268 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:26:52.059093 systemd-logind[1524]: Removed session 12. Jul 11 00:26:57.061058 systemd[1]: Started sshd@12-10.0.0.118:22-10.0.0.1:44714.service - OpenSSH per-connection server daemon (10.0.0.1:44714). Jul 11 00:26:57.091101 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 44714 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:57.092329 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:57.095773 systemd-logind[1524]: New session 13 of user core. Jul 11 00:26:57.100059 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 11 00:26:57.243965 sshd[5674]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:57.254183 systemd[1]: Started sshd@13-10.0.0.118:22-10.0.0.1:44726.service - OpenSSH per-connection server daemon (10.0.0.1:44726). Jul 11 00:26:57.254570 systemd[1]: sshd@12-10.0.0.118:22-10.0.0.1:44714.service: Deactivated successfully. Jul 11 00:26:57.257136 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:26:57.257958 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:26:57.259358 systemd-logind[1524]: Removed session 13. Jul 11 00:26:57.286702 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 44726 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:57.288145 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:57.292633 systemd-logind[1524]: New session 14 of user core. Jul 11 00:26:57.299090 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 00:26:57.511227 sshd[5687]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:57.520245 systemd[1]: Started sshd@14-10.0.0.118:22-10.0.0.1:44730.service - OpenSSH per-connection server daemon (10.0.0.1:44730). Jul 11 00:26:57.520649 systemd[1]: sshd@13-10.0.0.118:22-10.0.0.1:44726.service: Deactivated successfully. Jul 11 00:26:57.524158 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:26:57.524340 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:26:57.531185 systemd-logind[1524]: Removed session 14. Jul 11 00:26:57.560338 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 44730 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:57.561723 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:57.566422 systemd-logind[1524]: New session 15 of user core. Jul 11 00:26:57.577227 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:26:58.590513 containerd[1542]: time="2025-07-11T00:26:58.590472435Z" level=info msg="StopPodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\"" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.635 [WARNING][5729] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54b44fa-bd20-4c12-91fe-34f4011b849a", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9", Pod:"calico-apiserver-76d459d97b-9692q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cbe2affa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.635 [INFO][5729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.635 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" iface="eth0" netns="" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.635 [INFO][5729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.635 [INFO][5729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.663 [INFO][5740] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.663 [INFO][5740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.663 [INFO][5740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.672 [WARNING][5740] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.672 [INFO][5740] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.675 [INFO][5740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:58.679457 containerd[1542]: 2025-07-11 00:26:58.677 [INFO][5729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.679984 containerd[1542]: time="2025-07-11T00:26:58.679508909Z" level=info msg="TearDown network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" successfully" Jul 11 00:26:58.679984 containerd[1542]: time="2025-07-11T00:26:58.679539149Z" level=info msg="StopPodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" returns successfully" Jul 11 00:26:58.680164 containerd[1542]: time="2025-07-11T00:26:58.680134828Z" level=info msg="RemovePodSandbox for \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\"" Jul 11 00:26:58.692236 containerd[1542]: time="2025-07-11T00:26:58.692160782Z" level=info msg="Forcibly stopping sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\"" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.734 [WARNING][5758] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54b44fa-bd20-4c12-91fe-34f4011b849a", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1867e5c5a911f503da982d4b6ab4367483de36c1b18795f11529cdcc9831e1d9", Pod:"calico-apiserver-76d459d97b-9692q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cbe2affa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.735 [INFO][5758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.735 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" iface="eth0" netns="" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.735 [INFO][5758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.735 [INFO][5758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.808 [INFO][5766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.809 [INFO][5766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.809 [INFO][5766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.817 [WARNING][5766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.817 [INFO][5766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" HandleID="k8s-pod-network.691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Workload="localhost-k8s-calico--apiserver--76d459d97b--9692q-eth0" Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.823 [INFO][5766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:58.826896 containerd[1542]: 2025-07-11 00:26:58.825 [INFO][5758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c" Jul 11 00:26:58.827319 containerd[1542]: time="2025-07-11T00:26:58.826939913Z" level=info msg="TearDown network for sandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" successfully" Jul 11 00:26:58.851143 containerd[1542]: time="2025-07-11T00:26:58.849436301Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:58.851143 containerd[1542]: time="2025-07-11T00:26:58.849524461Z" level=info msg="RemovePodSandbox \"691a1ef6b87c86ef707b1f4a559550e7cb396ee50f525a7c622b5bdacb6f8b0c\" returns successfully" Jul 11 00:26:58.851143 containerd[1542]: time="2025-07-11T00:26:58.850276541Z" level=info msg="StopPodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\"" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.895 [WARNING][5782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4smck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2771ca54-4f98-4224-98e6-fa1c41a6b452", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c", Pod:"goldmane-58fd7646b9-4smck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali990a5d7ad6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.895 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.895 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" iface="eth0" netns="" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.895 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.895 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.923 [INFO][5790] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.923 [INFO][5790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.923 [INFO][5790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.938 [WARNING][5790] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.938 [INFO][5790] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.940 [INFO][5790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:58.945234 containerd[1542]: 2025-07-11 00:26:58.942 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:58.945234 containerd[1542]: time="2025-07-11T00:26:58.944478693Z" level=info msg="TearDown network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" successfully" Jul 11 00:26:58.945234 containerd[1542]: time="2025-07-11T00:26:58.944505933Z" level=info msg="StopPodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" returns successfully" Jul 11 00:26:58.948133 containerd[1542]: time="2025-07-11T00:26:58.945515132Z" level=info msg="RemovePodSandbox for \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\"" Jul 11 00:26:58.948133 containerd[1542]: time="2025-07-11T00:26:58.945546172Z" level=info msg="Forcibly stopping sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\"" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:58.997 [WARNING][5807] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--4smck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2771ca54-4f98-4224-98e6-fa1c41a6b452", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3072e21ae63f85650b74cef603633a232e3601786257e6edb73350d02b1fdb4c", Pod:"goldmane-58fd7646b9-4smck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali990a5d7ad6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:58.997 [INFO][5807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:58.997 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" iface="eth0" netns="" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:58.997 [INFO][5807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:58.997 [INFO][5807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.030 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.030 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.030 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.039 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.039 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" HandleID="k8s-pod-network.8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Workload="localhost-k8s-goldmane--58fd7646b9--4smck-eth0" Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.041 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.045289 containerd[1542]: 2025-07-11 00:26:59.043 [INFO][5807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241" Jul 11 00:26:59.045689 containerd[1542]: time="2025-07-11T00:26:59.045324682Z" level=info msg="TearDown network for sandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" successfully" Jul 11 00:26:59.069872 containerd[1542]: time="2025-07-11T00:26:59.069805150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:59.070013 containerd[1542]: time="2025-07-11T00:26:59.069903110Z" level=info msg="RemovePodSandbox \"8f44c125794362276342226f49aa2fe1772bf4c30d58f69f0ab5e1ab01894241\" returns successfully" Jul 11 00:26:59.070366 containerd[1542]: time="2025-07-11T00:26:59.070330830Z" level=info msg="StopPodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\"" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.121 [WARNING][5835] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65271ea6-5134-4ff0-a88f-9119ebccb488", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79", Pod:"coredns-7c65d6cfc9-qjjwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc024b69f44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.121 [INFO][5835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.121 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" iface="eth0" netns="" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.121 [INFO][5835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.121 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.145 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.145 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.145 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.156 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.156 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.158 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.166008 containerd[1542]: 2025-07-11 00:26:59.161 [INFO][5835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.166008 containerd[1542]: time="2025-07-11T00:26:59.165986184Z" level=info msg="TearDown network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" successfully" Jul 11 00:26:59.166924 containerd[1542]: time="2025-07-11T00:26:59.166015984Z" level=info msg="StopPodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" returns successfully" Jul 11 00:26:59.167105 containerd[1542]: time="2025-07-11T00:26:59.167074103Z" level=info msg="RemovePodSandbox for \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\"" Jul 11 00:26:59.167135 containerd[1542]: time="2025-07-11T00:26:59.167112903Z" level=info msg="Forcibly stopping sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\"" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.203 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"65271ea6-5134-4ff0-a88f-9119ebccb488", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2250d33d8890a14be23d2ff79f9fbfcf1fb60683c8ac115f7ba1a87bc22efc79", Pod:"coredns-7c65d6cfc9-qjjwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc024b69f44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.204 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.204 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" iface="eth0" netns="" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.204 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.204 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.228 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.229 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.229 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.240 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.240 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" HandleID="k8s-pod-network.ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Workload="localhost-k8s-coredns--7c65d6cfc9--qjjwl-eth0" Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.241 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.245985 containerd[1542]: 2025-07-11 00:26:59.243 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084" Jul 11 00:26:59.246387 containerd[1542]: time="2025-07-11T00:26:59.246040225Z" level=info msg="TearDown network for sandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" successfully" Jul 11 00:26:59.258772 containerd[1542]: time="2025-07-11T00:26:59.258718579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:59.258881 containerd[1542]: time="2025-07-11T00:26:59.258799659Z" level=info msg="RemovePodSandbox \"ca3eff38ed24fa257342922c5fbbf0b8f79d9b2e8f4a37a04f9d7047d2379084\" returns successfully" Jul 11 00:26:59.259452 containerd[1542]: time="2025-07-11T00:26:59.259354819Z" level=info msg="StopPodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\"" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.295 [WARNING][5889] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" WorkloadEndpoint="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.296 [INFO][5889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.296 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" iface="eth0" netns="" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.296 [INFO][5889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.296 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.328 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.328 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.328 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.339 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.339 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.341 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.351406 containerd[1542]: 2025-07-11 00:26:59.345 [INFO][5889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.352602 containerd[1542]: time="2025-07-11T00:26:59.351547014Z" level=info msg="TearDown network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" successfully" Jul 11 00:26:59.352602 containerd[1542]: time="2025-07-11T00:26:59.351573134Z" level=info msg="StopPodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" returns successfully" Jul 11 00:26:59.352602 containerd[1542]: time="2025-07-11T00:26:59.352409854Z" level=info msg="RemovePodSandbox for \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\"" Jul 11 00:26:59.352602 containerd[1542]: time="2025-07-11T00:26:59.352459654Z" level=info msg="Forcibly stopping sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\"" Jul 11 00:26:59.371933 sshd[5701]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:59.384335 systemd[1]: Started sshd@15-10.0.0.118:22-10.0.0.1:44736.service - OpenSSH per-connection server daemon (10.0.0.1:44736). Jul 11 00:26:59.389315 systemd[1]: sshd@14-10.0.0.118:22-10.0.0.1:44730.service: Deactivated successfully. Jul 11 00:26:59.401865 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:26:59.405798 systemd-logind[1524]: Session 15 logged out. 
Waiting for processes to exit. Jul 11 00:26:59.410911 systemd-logind[1524]: Removed session 15. Jul 11 00:26:59.448386 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 44736 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:59.452328 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:59.458909 systemd-logind[1524]: New session 16 of user core. Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.420 [WARNING][5918] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" WorkloadEndpoint="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.420 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.420 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" iface="eth0" netns="" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.420 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.420 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.443 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.443 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.443 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.456 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.456 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" HandleID="k8s-pod-network.21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Workload="localhost-k8s-whisker--7457bc779--ccvq6-eth0" Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.457 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.461599 containerd[1542]: 2025-07-11 00:26:59.459 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7" Jul 11 00:26:59.461950 containerd[1542]: time="2025-07-11T00:26:59.461636601Z" level=info msg="TearDown network for sandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" successfully" Jul 11 00:26:59.464580 containerd[1542]: time="2025-07-11T00:26:59.464539280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:59.464636 containerd[1542]: time="2025-07-11T00:26:59.464616160Z" level=info msg="RemovePodSandbox \"21f423c009dab5ce3c437e9802e11faf126ec31b5ef4dd6ff86a7190f547edd7\" returns successfully" Jul 11 00:26:59.465244 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:26:59.466062 containerd[1542]: time="2025-07-11T00:26:59.466017439Z" level=info msg="StopPodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\"" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.499 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0", GenerateName:"calico-kube-controllers-6bbc6b4cc-", Namespace:"calico-system", SelfLink:"", UID:"f8b0240e-8f74-491a-84c4-c496b1ecf4cc", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc6b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c", Pod:"calico-kube-controllers-6bbc6b4cc-xqcp5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e7b680bebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.499 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.499 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" iface="eth0" netns="" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.499 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.499 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.524 [INFO][5962] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.524 [INFO][5962] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.524 [INFO][5962] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.534 [WARNING][5962] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.534 [INFO][5962] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.535 [INFO][5962] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.539891 containerd[1542]: 2025-07-11 00:26:59.537 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.540471 containerd[1542]: time="2025-07-11T00:26:59.539935084Z" level=info msg="TearDown network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" successfully" Jul 11 00:26:59.540471 containerd[1542]: time="2025-07-11T00:26:59.539968484Z" level=info msg="StopPodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" returns successfully" Jul 11 00:26:59.540894 containerd[1542]: time="2025-07-11T00:26:59.540866843Z" level=info msg="RemovePodSandbox for \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\"" Jul 11 00:26:59.540938 containerd[1542]: time="2025-07-11T00:26:59.540905123Z" level=info msg="Forcibly stopping sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\"" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.576 [WARNING][5979] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0", GenerateName:"calico-kube-controllers-6bbc6b4cc-", Namespace:"calico-system", SelfLink:"", UID:"f8b0240e-8f74-491a-84c4-c496b1ecf4cc", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bbc6b4cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2efa2ed076731e77aa84c26de149c5ca2a35b970d361b86318c0a023a90c4c3c", Pod:"calico-kube-controllers-6bbc6b4cc-xqcp5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e7b680bebf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.576 [INFO][5979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.576 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" iface="eth0" netns="" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.576 [INFO][5979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.576 [INFO][5979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.598 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.598 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.598 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.606 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.606 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" HandleID="k8s-pod-network.a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Workload="localhost-k8s-calico--kube--controllers--6bbc6b4cc--xqcp5-eth0" Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.608 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.615274 containerd[1542]: 2025-07-11 00:26:59.610 [INFO][5979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866" Jul 11 00:26:59.615274 containerd[1542]: time="2025-07-11T00:26:59.613731728Z" level=info msg="TearDown network for sandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" successfully" Jul 11 00:26:59.617423 containerd[1542]: time="2025-07-11T00:26:59.617385406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:59.617559 containerd[1542]: time="2025-07-11T00:26:59.617542406Z" level=info msg="RemovePodSandbox \"a26559630a1b9457fa7df719b4e4b55ece2c34d306a98feef93ae6f3d12f4866\" returns successfully" Jul 11 00:26:59.618117 containerd[1542]: time="2025-07-11T00:26:59.618087566Z" level=info msg="StopPodSandbox for \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\"" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.653 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146", Pod:"csi-node-driver-79v4x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0abd4fc2dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.653 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.653 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" iface="eth0" netns="" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.653 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.653 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.673 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.673 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.673 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.683 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.683 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.685 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.689023 containerd[1542]: 2025-07-11 00:26:59.687 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.689417 containerd[1542]: time="2025-07-11T00:26:59.689067252Z" level=info msg="TearDown network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" successfully" Jul 11 00:26:59.689417 containerd[1542]: time="2025-07-11T00:26:59.689095252Z" level=info msg="StopPodSandbox for \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" returns successfully" Jul 11 00:26:59.690172 containerd[1542]: time="2025-07-11T00:26:59.689820851Z" level=info msg="RemovePodSandbox for \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\"" Jul 11 00:26:59.690172 containerd[1542]: time="2025-07-11T00:26:59.689884771Z" level=info msg="Forcibly stopping sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\"" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.728 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--79v4x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6af0ab6f-2e3c-4302-a4e5-e8d8f53a43ab", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bb3b876160f9d9d00b7bd998719811af844addeb1c2c1e387cff10854540146", Pod:"csi-node-driver-79v4x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie0abd4fc2dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.728 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.728 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" iface="eth0" netns="" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.728 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.728 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.750 [INFO][6047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.750 [INFO][6047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.750 [INFO][6047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.760 [WARNING][6047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.760 [INFO][6047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" HandleID="k8s-pod-network.d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Workload="localhost-k8s-csi--node--driver--79v4x-eth0" Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.763 [INFO][6047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.767023 containerd[1542]: 2025-07-11 00:26:59.765 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e" Jul 11 00:26:59.767963 containerd[1542]: time="2025-07-11T00:26:59.767450814Z" level=info msg="TearDown network for sandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" successfully" Jul 11 00:26:59.770751 containerd[1542]: time="2025-07-11T00:26:59.770714572Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:59.770930 containerd[1542]: time="2025-07-11T00:26:59.770897052Z" level=info msg="RemovePodSandbox \"d8716c6b3f42fd01e5a1b05a69e980c7c71272777453ffd2fac98448435ff15e\" returns successfully" Jul 11 00:26:59.771491 containerd[1542]: time="2025-07-11T00:26:59.771462812Z" level=info msg="StopPodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\"" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.812 [WARNING][6064] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--677wx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68360d6b-1341-4cc8-9b2e-e1fda0c521fc", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543", Pod:"coredns-7c65d6cfc9-677wx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64d5e4f4734", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.812 [INFO][6064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.812 [INFO][6064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" iface="eth0" netns="" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.812 [INFO][6064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.812 [INFO][6064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.832 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.833 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.833 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.842 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.843 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.845 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.848988 containerd[1542]: 2025-07-11 00:26:59.847 [INFO][6064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.849506 containerd[1542]: time="2025-07-11T00:26:59.849020335Z" level=info msg="TearDown network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" successfully" Jul 11 00:26:59.849506 containerd[1542]: time="2025-07-11T00:26:59.849045375Z" level=info msg="StopPodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" returns successfully" Jul 11 00:26:59.850275 containerd[1542]: time="2025-07-11T00:26:59.850243174Z" level=info msg="RemovePodSandbox for \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\"" Jul 11 00:26:59.850343 containerd[1542]: time="2025-07-11T00:26:59.850281934Z" level=info msg="Forcibly stopping sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\"" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.884 [WARNING][6090] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--677wx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"68360d6b-1341-4cc8-9b2e-e1fda0c521fc", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73b24adcc892549409f3c0d23a34767f4002128559bc40887f52806d196ea543", Pod:"coredns-7c65d6cfc9-677wx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64d5e4f4734", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.884 [INFO][6090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.884 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" iface="eth0" netns="" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.884 [INFO][6090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.884 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.906 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.906 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.906 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.916 [WARNING][6099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.916 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" HandleID="k8s-pod-network.4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Workload="localhost-k8s-coredns--7c65d6cfc9--677wx-eth0" Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.917 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:26:59.921158 containerd[1542]: 2025-07-11 00:26:59.919 [INFO][6090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301" Jul 11 00:26:59.921578 containerd[1542]: time="2025-07-11T00:26:59.921201620Z" level=info msg="TearDown network for sandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" successfully" Jul 11 00:26:59.926343 containerd[1542]: time="2025-07-11T00:26:59.926291377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:26:59.926458 containerd[1542]: time="2025-07-11T00:26:59.926364057Z" level=info msg="RemovePodSandbox \"4c9e71f73b0c74360a780f2de8ddfe6c38a8bf78bd2963013425bfa747107301\" returns successfully" Jul 11 00:26:59.927154 containerd[1542]: time="2025-07-11T00:26:59.927051937Z" level=info msg="StopPodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\"" Jul 11 00:26:59.936478 sshd[5924]: pam_unix(sshd:session): session closed for user core Jul 11 00:26:59.946106 systemd[1]: Started sshd@16-10.0.0.118:22-10.0.0.1:44750.service - OpenSSH per-connection server daemon (10.0.0.1:44750). Jul 11 00:26:59.946797 systemd[1]: sshd@15-10.0.0.118:22-10.0.0.1:44736.service: Deactivated successfully. Jul 11 00:26:59.950200 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit. Jul 11 00:26:59.950706 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:26:59.955720 systemd-logind[1524]: Removed session 16. Jul 11 00:26:59.986705 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 44750 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:26:59.988093 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:26:59.993004 systemd-logind[1524]: New session 17 of user core. Jul 11 00:26:59.999133 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.973 [WARNING][6117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"240131e8-c1af-4198-9629-bd8842d57a9c", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc", Pod:"calico-apiserver-76d459d97b-xlnxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9758fb15c91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.973 [INFO][6117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.973 [INFO][6117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" iface="eth0" netns="" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.973 [INFO][6117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.973 [INFO][6117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.998 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.998 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:26:59.998 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:27:00.007 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:27:00.007 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:27:00.009 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:00.012429 containerd[1542]: 2025-07-11 00:27:00.010 [INFO][6117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.013245 containerd[1542]: time="2025-07-11T00:27:00.012714856Z" level=info msg="TearDown network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" successfully" Jul 11 00:27:00.013245 containerd[1542]: time="2025-07-11T00:27:00.012741256Z" level=info msg="StopPodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" returns successfully" Jul 11 00:27:00.013245 containerd[1542]: time="2025-07-11T00:27:00.013228736Z" level=info msg="RemovePodSandbox for \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\"" Jul 11 00:27:00.013305 containerd[1542]: time="2025-07-11T00:27:00.013259176Z" level=info msg="Forcibly stopping sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\"" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.045 [WARNING][6151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0", GenerateName:"calico-apiserver-76d459d97b-", Namespace:"calico-apiserver", SelfLink:"", UID:"240131e8-c1af-4198-9629-bd8842d57a9c", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 26, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76d459d97b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d0bf605fffcace2a930ca342d4fc3e155eb4d93e00d52115abe64466c5424cc", Pod:"calico-apiserver-76d459d97b-xlnxl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9758fb15c91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.045 [INFO][6151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.046 [INFO][6151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" iface="eth0" netns="" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.046 [INFO][6151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.046 [INFO][6151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.067 [INFO][6161] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.067 [INFO][6161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.067 [INFO][6161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.076 [WARNING][6161] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.076 [INFO][6161] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" HandleID="k8s-pod-network.6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Workload="localhost-k8s-calico--apiserver--76d459d97b--xlnxl-eth0" Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.077 [INFO][6161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:27:00.080940 containerd[1542]: 2025-07-11 00:27:00.079 [INFO][6151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3" Jul 11 00:27:00.080940 containerd[1542]: time="2025-07-11T00:27:00.080926745Z" level=info msg="TearDown network for sandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" successfully" Jul 11 00:27:00.084196 containerd[1542]: time="2025-07-11T00:27:00.084149424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:27:00.084264 containerd[1542]: time="2025-07-11T00:27:00.084224744Z" level=info msg="RemovePodSandbox \"6198647b9c0e889733347f9c0282c16e8ebdd5aa0d1fb1eaac16f3239a0c6dc3\" returns successfully" Jul 11 00:27:00.136158 sshd[6122]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:00.139450 systemd[1]: sshd@16-10.0.0.118:22-10.0.0.1:44750.service: Deactivated successfully. Jul 11 00:27:00.141429 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:27:00.141503 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:27:00.142339 systemd-logind[1524]: Removed session 17. Jul 11 00:27:05.148099 systemd[1]: Started sshd@17-10.0.0.118:22-10.0.0.1:41782.service - OpenSSH per-connection server daemon (10.0.0.1:41782). Jul 11 00:27:05.186350 sshd[6184]: Accepted publickey for core from 10.0.0.1 port 41782 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:27:05.187846 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:05.195428 systemd-logind[1524]: New session 18 of user core. Jul 11 00:27:05.201219 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 11 00:27:05.424097 sshd[6184]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:05.427322 systemd[1]: sshd@17-10.0.0.118:22-10.0.0.1:41782.service: Deactivated successfully. Jul 11 00:27:05.430666 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 00:27:05.431444 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit. Jul 11 00:27:05.432605 systemd-logind[1524]: Removed session 18. Jul 11 00:27:10.446460 systemd[1]: Started sshd@18-10.0.0.118:22-10.0.0.1:41794.service - OpenSSH per-connection server daemon (10.0.0.1:41794). 
Jul 11 00:27:10.493074 sshd[6202]: Accepted publickey for core from 10.0.0.1 port 41794 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:27:10.496982 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:10.502308 systemd-logind[1524]: New session 19 of user core. Jul 11 00:27:10.509474 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 00:27:10.697374 sshd[6202]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:10.703355 systemd[1]: sshd@18-10.0.0.118:22-10.0.0.1:41794.service: Deactivated successfully. Jul 11 00:27:10.706379 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit. Jul 11 00:27:10.706747 systemd[1]: session-19.scope: Deactivated successfully. Jul 11 00:27:10.708470 systemd-logind[1524]: Removed session 19. Jul 11 00:27:11.754003 kubelet[2609]: I0711 00:27:11.753955 2609 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:27:15.721190 systemd[1]: Started sshd@19-10.0.0.118:22-10.0.0.1:33658.service - OpenSSH per-connection server daemon (10.0.0.1:33658). Jul 11 00:27:15.756386 sshd[6247]: Accepted publickey for core from 10.0.0.1 port 33658 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:27:15.758166 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:27:15.762138 systemd-logind[1524]: New session 20 of user core. Jul 11 00:27:15.776249 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 00:27:15.977375 sshd[6247]: pam_unix(sshd:session): session closed for user core Jul 11 00:27:15.982425 systemd[1]: sshd@19-10.0.0.118:22-10.0.0.1:33658.service: Deactivated successfully. Jul 11 00:27:15.985157 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 00:27:15.986515 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit. Jul 11 00:27:15.987544 systemd-logind[1524]: Removed session 20.