Jul 10 00:36:40.898559 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:36:40.898581 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jul 9 22:54:34 -00 2025
Jul 10 00:36:40.898590 kernel: KASLR enabled
Jul 10 00:36:40.898596 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:36:40.898602 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 10 00:36:40.898608 kernel: random: crng init done
Jul 10 00:36:40.898615 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:36:40.898620 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 10 00:36:40.898626 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:36:40.898634 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898640 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898646 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898652 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898658 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898669 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898678 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898684 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898691 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:36:40.898697 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:36:40.898703 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:36:40.898720 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:36:40.898726 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 10 00:36:40.898733 kernel: Zone ranges:
Jul 10 00:36:40.898739 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:36:40.898745 kernel: DMA32 empty
Jul 10 00:36:40.898753 kernel: Normal empty
Jul 10 00:36:40.898760 kernel: Movable zone start for each node
Jul 10 00:36:40.898766 kernel: Early memory node ranges
Jul 10 00:36:40.898773 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 10 00:36:40.898779 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 10 00:36:40.898785 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 10 00:36:40.898791 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 00:36:40.898798 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 00:36:40.898804 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 00:36:40.898810 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 00:36:40.898816 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:36:40.898823 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:36:40.898830 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:36:40.898837 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:36:40.898843 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:36:40.898852 kernel: psci: Trusted OS migration not required
Jul 10 00:36:40.898859 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:36:40.898866 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:36:40.898874 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 00:36:40.898881 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 00:36:40.898887 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:36:40.898894 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:36:40.898901 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:36:40.898907 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:36:40.898914 kernel: CPU features: detected: Spectre-v4
Jul 10 00:36:40.898921 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:36:40.898927 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:36:40.898934 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:36:40.898948 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:36:40.898955 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:36:40.898962 kernel: alternatives: applying boot alternatives
Jul 10 00:36:40.898970 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:36:40.898977 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:36:40.898984 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:36:40.898991 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:36:40.898997 kernel: Fallback order for Node 0: 0
Jul 10 00:36:40.899004 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:36:40.899011 kernel: Policy zone: DMA
Jul 10 00:36:40.899017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:36:40.899025 kernel: software IO TLB: area num 4.
Jul 10 00:36:40.899032 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 10 00:36:40.899039 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 10 00:36:40.899046 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:36:40.899053 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:36:40.899060 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:36:40.899067 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:36:40.899074 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:36:40.899081 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:36:40.899088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:36:40.899094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:36:40.899101 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:36:40.899109 kernel: GICv3: 256 SPIs implemented
Jul 10 00:36:40.899116 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:36:40.899122 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:36:40.899129 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 00:36:40.899138 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:36:40.899145 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:36:40.899152 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:36:40.899159 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:36:40.899166 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 10 00:36:40.899173 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 10 00:36:40.899179 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:36:40.899188 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:36:40.899194 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:36:40.899201 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:36:40.899208 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:36:40.899215 kernel: arm-pv: using stolen time PV
Jul 10 00:36:40.899222 kernel: Console: colour dummy device 80x25
Jul 10 00:36:40.899229 kernel: ACPI: Core revision 20230628
Jul 10 00:36:40.899236 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:36:40.899243 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:36:40.899250 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 00:36:40.899258 kernel: landlock: Up and running.
Jul 10 00:36:40.899265 kernel: SELinux: Initializing.
Jul 10 00:36:40.899272 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:36:40.899279 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:36:40.899286 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:36:40.899293 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:36:40.899300 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:36:40.899307 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:36:40.899314 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:36:40.899322 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:36:40.899329 kernel: Remapping and enabling EFI services.
Jul 10 00:36:40.899335 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:36:40.899342 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:36:40.899349 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:36:40.899356 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 10 00:36:40.899363 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:36:40.899373 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:36:40.899380 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:36:40.899387 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:36:40.899395 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 10 00:36:40.899402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:36:40.899416 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:36:40.899425 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:36:40.899432 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:36:40.899439 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 10 00:36:40.899447 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:36:40.899453 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:36:40.899461 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:36:40.899469 kernel: SMP: Total of 4 processors activated.
Jul 10 00:36:40.899477 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:36:40.899484 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:36:40.899491 kernel: CPU features: detected: Common not Private translations
Jul 10 00:36:40.899498 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:36:40.899506 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 00:36:40.899513 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:36:40.899531 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:36:40.899541 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:36:40.899560 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:36:40.899568 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:36:40.899576 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:36:40.899583 kernel: alternatives: applying system-wide alternatives
Jul 10 00:36:40.899590 kernel: devtmpfs: initialized
Jul 10 00:36:40.899597 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:36:40.899605 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:36:40.899612 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:36:40.899621 kernel: SMBIOS 3.0.0 present.
Jul 10 00:36:40.899628 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 10 00:36:40.899635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:36:40.899643 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:36:40.899650 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:36:40.899657 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:36:40.899664 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:36:40.899672 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 10 00:36:40.899679 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:36:40.899687 kernel: cpuidle: using governor menu
Jul 10 00:36:40.899695 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:36:40.899702 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:36:40.899718 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:36:40.899726 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:36:40.899735 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 00:36:40.899743 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 00:36:40.899750 kernel: Modules: 509008 pages in range for PLT usage
Jul 10 00:36:40.899757 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:36:40.899767 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:36:40.899774 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:36:40.899781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:36:40.899788 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:36:40.899796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:36:40.899803 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:36:40.899810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:36:40.899817 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:36:40.899824 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:36:40.899833 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:36:40.899840 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:36:40.899847 kernel: ACPI: Interpreter enabled
Jul 10 00:36:40.899854 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:36:40.899861 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:36:40.899869 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:36:40.899876 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:36:40.899883 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:36:40.900027 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:36:40.900104 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:36:40.900169 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:36:40.900232 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:36:40.900294 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:36:40.900304 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:36:40.900311 kernel: PCI host bridge to bus 0000:00
Jul 10 00:36:40.900380 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:36:40.900440 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:36:40.900497 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:36:40.900604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:36:40.900693 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:36:40.900783 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:36:40.900850 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:36:40.900920 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:36:40.900991 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:36:40.901057 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:36:40.901121 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:36:40.901185 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:36:40.901249 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:36:40.901307 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:36:40.901368 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:36:40.901378 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:36:40.901386 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:36:40.901393 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:36:40.901400 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:36:40.901408 kernel: iommu: Default domain type: Translated
Jul 10 00:36:40.901415 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:36:40.901422 kernel: efivars: Registered efivars operations
Jul 10 00:36:40.901431 kernel: vgaarb: loaded
Jul 10 00:36:40.901439 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:36:40.901446 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:36:40.901457 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:36:40.901465 kernel: pnp: PnP ACPI init
Jul 10 00:36:40.901553 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:36:40.901565 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:36:40.901572 kernel: NET: Registered PF_INET protocol family
Jul 10 00:36:40.901579 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:36:40.901589 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:36:40.901597 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:36:40.901604 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:36:40.901612 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:36:40.901619 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:36:40.901626 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:36:40.901634 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:36:40.901641 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:36:40.901650 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:36:40.901657 kernel: kvm [1]: HYP mode not available
Jul 10 00:36:40.901665 kernel: Initialise system trusted keyrings
Jul 10 00:36:40.901672 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:36:40.901679 kernel: Key type asymmetric registered
Jul 10 00:36:40.901686 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:36:40.901694 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:36:40.901701 kernel: io scheduler mq-deadline registered
Jul 10 00:36:40.901719 kernel: io scheduler kyber registered
Jul 10 00:36:40.901726 kernel: io scheduler bfq registered
Jul 10 00:36:40.901736 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:36:40.901743 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:36:40.901750 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:36:40.901828 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:36:40.901839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:36:40.901846 kernel: thunder_xcv, ver 1.0
Jul 10 00:36:40.901853 kernel: thunder_bgx, ver 1.0
Jul 10 00:36:40.901864 kernel: nicpf, ver 1.0
Jul 10 00:36:40.901872 kernel: nicvf, ver 1.0
Jul 10 00:36:40.901956 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:36:40.902020 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:36:40 UTC (1752107800)
Jul 10 00:36:40.902030 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:36:40.902037 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:36:40.902045 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 00:36:40.902052 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:36:40.902059 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:36:40.902066 kernel: Segment Routing with IPv6
Jul 10 00:36:40.902076 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:36:40.902083 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:36:40.902090 kernel: Key type dns_resolver registered
Jul 10 00:36:40.902097 kernel: registered taskstats version 1
Jul 10 00:36:40.902105 kernel: Loading compiled-in X.509 certificates
Jul 10 00:36:40.902112 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 9cbc45ab00feb4acb0fa362a962909c99fb6ef52'
Jul 10 00:36:40.902119 kernel: Key type .fscrypt registered
Jul 10 00:36:40.902126 kernel: Key type fscrypt-provisioning registered
Jul 10 00:36:40.902133 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:36:40.902142 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:36:40.902149 kernel: ima: No architecture policies found
Jul 10 00:36:40.902157 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:36:40.902164 kernel: clk: Disabling unused clocks
Jul 10 00:36:40.902171 kernel: Freeing unused kernel memory: 39424K
Jul 10 00:36:40.902178 kernel: Run /init as init process
Jul 10 00:36:40.902185 kernel: with arguments:
Jul 10 00:36:40.902193 kernel: /init
Jul 10 00:36:40.902200 kernel: with environment:
Jul 10 00:36:40.902208 kernel: HOME=/
Jul 10 00:36:40.902215 kernel: TERM=linux
Jul 10 00:36:40.902222 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:36:40.902231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:36:40.902241 systemd[1]: Detected virtualization kvm.
Jul 10 00:36:40.902249 systemd[1]: Detected architecture arm64.
Jul 10 00:36:40.902256 systemd[1]: Running in initrd.
Jul 10 00:36:40.902269 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:36:40.902277 systemd[1]: Hostname set to <localhost>.
Jul 10 00:36:40.902286 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:36:40.902293 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:36:40.902302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:36:40.902310 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:36:40.902318 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:36:40.902326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:36:40.902336 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:36:40.902344 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:36:40.902354 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:36:40.902362 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:36:40.902370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:36:40.902378 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:36:40.902386 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:36:40.902395 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:36:40.902403 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:36:40.902411 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:36:40.902419 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:36:40.902427 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:36:40.902435 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:36:40.902443 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:36:40.902451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:36:40.902459 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:36:40.902468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:36:40.902477 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:36:40.902486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:36:40.902494 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:36:40.902504 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:36:40.902514 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:36:40.902533 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:36:40.902541 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:36:40.902550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:36:40.902558 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:36:40.902566 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:36:40.902574 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:36:40.902583 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:36:40.902592 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:36:40.902625 systemd-journald[238]: Collecting audit messages is disabled.
Jul 10 00:36:40.902646 systemd-journald[238]: Journal started
Jul 10 00:36:40.902669 systemd-journald[238]: Runtime Journal (/run/log/journal/5b019331feed41a4a1d4f0bbbf54e2c8) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:36:40.905934 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:36:40.893538 systemd-modules-load[239]: Inserted module 'overlay'
Jul 10 00:36:40.908335 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 10 00:36:40.909929 kernel: Bridge firewalling registered
Jul 10 00:36:40.909947 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:36:40.911533 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:36:40.912584 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:36:40.913729 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:36:40.917978 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:36:40.920197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:36:40.922198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:36:40.929800 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:36:40.932673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:36:40.933786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:36:40.937569 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:36:40.946648 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:36:40.948549 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:36:40.957235 dracut-cmdline[275]: dracut-dracut-053
Jul 10 00:36:40.959693 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:36:40.976755 systemd-resolved[277]: Positive Trust Anchors:
Jul 10 00:36:40.976771 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:36:40.976803 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:36:40.981634 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jul 10 00:36:40.982795 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:36:40.983977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:36:41.035563 kernel: SCSI subsystem initialized
Jul 10 00:36:41.041538 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:36:41.051560 kernel: iscsi: registered transport (tcp)
Jul 10 00:36:41.064540 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:36:41.064592 kernel: QLogic iSCSI HBA Driver
Jul 10 00:36:41.113174 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:36:41.124716 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:36:41.144107 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:36:41.144170 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:36:41.145533 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 00:36:41.196550 kernel: raid6: neonx8 gen() 15585 MB/s
Jul 10 00:36:41.213546 kernel: raid6: neonx4 gen() 15616 MB/s
Jul 10 00:36:41.230556 kernel: raid6: neonx2 gen() 13166 MB/s
Jul 10 00:36:41.247539 kernel: raid6: neonx1 gen() 10478 MB/s
Jul 10 00:36:41.264556 kernel: raid6: int64x8 gen() 6962 MB/s
Jul 10 00:36:41.281543 kernel: raid6: int64x4 gen() 7343 MB/s
Jul 10 00:36:41.298542 kernel: raid6: int64x2 gen() 6118 MB/s
Jul 10 00:36:41.315547 kernel: raid6: int64x1 gen() 5006 MB/s
Jul 10 00:36:41.315578 kernel: raid6: using algorithm neonx4 gen() 15616 MB/s
Jul 10 00:36:41.332579 kernel: raid6: .... xor() 12437 MB/s, rmw enabled
Jul 10 00:36:41.332615 kernel: raid6: using neon recovery algorithm
Jul 10 00:36:41.337667 kernel: xor: measuring software checksum speed
Jul 10 00:36:41.337685 kernel: 8regs : 19769 MB/sec
Jul 10 00:36:41.338709 kernel: 32regs : 19231 MB/sec
Jul 10 00:36:41.338722 kernel: arm64_neon : 27043 MB/sec
Jul 10 00:36:41.338732 kernel: xor: using function: arm64_neon (27043 MB/sec)
Jul 10 00:36:41.390557 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:36:41.400984 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:36:41.408687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:36:41.421500 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 10 00:36:41.424664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:36:41.427020 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:36:41.441700 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jul 10 00:36:41.467456 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:36:41.476679 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:36:41.516039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:36:41.521691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:36:41.534627 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:36:41.535848 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:36:41.537031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:36:41.538559 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:36:41.546686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:36:41.557501 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:36:41.559605 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 10 00:36:41.560958 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 00:36:41.564691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:36:41.564719 kernel: GPT:9289727 != 19775487
Jul 10 00:36:41.564733 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:36:41.564743 kernel: GPT:9289727 != 19775487
Jul 10 00:36:41.565602 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:36:41.565859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:36:41.571071 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:36:41.565971 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:36:41.572811 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:36:41.573825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:36:41.574166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:36:41.575784 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:36:41.586867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:36:41.589545 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (522)
Jul 10 00:36:41.594549 kernel: BTRFS: device fsid e18a5201-bc0c-484b-ba1b-be3c0a720c32 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (517)
Jul 10 00:36:41.595789 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:36:41.603863 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 00:36:41.611537 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 00:36:41.615050 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 00:36:41.615941 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 00:36:41.620807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:36:41.633670 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:36:41.635125 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:36:41.639296 disk-uuid[549]: Primary Header is updated.
Jul 10 00:36:41.639296 disk-uuid[549]: Secondary Entries is updated.
Jul 10 00:36:41.639296 disk-uuid[549]: Secondary Header is updated.
Jul 10 00:36:41.643548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:36:41.655581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:36:42.657559 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:36:42.658289 disk-uuid[550]: The operation has completed successfully.
Jul 10 00:36:42.683473 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:36:42.683607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:36:42.700720 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:36:42.703832 sh[573]: Success
Jul 10 00:36:42.715921 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 00:36:42.749354 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:36:42.760840 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:36:42.763610 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:36:42.771842 kernel: BTRFS info (device dm-0): first mount of filesystem e18a5201-bc0c-484b-ba1b-be3c0a720c32
Jul 10 00:36:42.771883 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:36:42.771894 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 00:36:42.772590 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 00:36:42.773580 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 00:36:42.777119 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:36:42.778315 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:36:42.786732 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:36:42.788135 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:36:42.796042 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:36:42.796093 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:36:42.796110 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:36:42.798532 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:36:42.806323 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 10 00:36:42.807639 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:36:42.812403 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:36:42.819710 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:36:42.894264 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:36:42.909707 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:36:42.938921 systemd-networkd[760]: lo: Link UP
Jul 10 00:36:42.938930 systemd-networkd[760]: lo: Gained carrier
Jul 10 00:36:42.940921 systemd-networkd[760]: Enumeration completed
Jul 10 00:36:42.941021 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:36:42.941918 systemd[1]: Reached target network.target - Network.
Jul 10 00:36:42.944299 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:36:42.944305 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:36:42.945117 systemd-networkd[760]: eth0: Link UP
Jul 10 00:36:42.945120 systemd-networkd[760]: eth0: Gained carrier
Jul 10 00:36:42.945127 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:36:42.952841 ignition[661]: Ignition 2.19.0
Jul 10 00:36:42.952854 ignition[661]: Stage: fetch-offline
Jul 10 00:36:42.952897 ignition[661]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:36:42.952912 ignition[661]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:36:42.953116 ignition[661]: parsed url from cmdline: ""
Jul 10 00:36:42.953119 ignition[661]: no config URL provided
Jul 10 00:36:42.953125 ignition[661]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:36:42.953133 ignition[661]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:36:42.953158 ignition[661]: op(1): [started] loading QEMU firmware config module
Jul 10 00:36:42.953162 ignition[661]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:36:42.962107 ignition[661]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:36:42.963605 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:36:43.001454 ignition[661]: parsing config with SHA512: f906f002fee2bf4d76ddaeb3e7ccf697ae5ecec7282ca19112922df0acf0f2311f0612eeb516beba0c528bd7707b482246712717eeada46470be0133df469727
Jul 10 00:36:43.006237 unknown[661]: fetched base config from "system"
Jul 10 00:36:43.006247 unknown[661]: fetched user config from "qemu"
Jul 10 00:36:43.008675 ignition[661]: fetch-offline: fetch-offline passed
Jul 10 00:36:43.008776 ignition[661]: Ignition finished successfully
Jul 10 00:36:43.011033 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:36:43.012076 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:36:43.020698 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:36:43.032723 ignition[772]: Ignition 2.19.0
Jul 10 00:36:43.032734 ignition[772]: Stage: kargs
Jul 10 00:36:43.032907 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:36:43.032918 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:36:43.033835 ignition[772]: kargs: kargs passed
Jul 10 00:36:43.033885 ignition[772]: Ignition finished successfully
Jul 10 00:36:43.036586 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:36:43.046708 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:36:43.056387 ignition[780]: Ignition 2.19.0
Jul 10 00:36:43.056398 ignition[780]: Stage: disks
Jul 10 00:36:43.056607 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:36:43.056618 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:36:43.057618 ignition[780]: disks: disks passed
Jul 10 00:36:43.057665 ignition[780]: Ignition finished successfully
Jul 10 00:36:43.060601 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:36:43.062495 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:36:43.064212 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:36:43.065152 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:36:43.065876 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:36:43.067311 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:36:43.075656 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:36:43.086365 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 10 00:36:43.091345 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:36:43.102687 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:36:43.164543 kernel: EXT4-fs (vda9): mounted filesystem c566fdd5-af6f-4008-858c-a2aed765f9b4 r/w with ordered data mode. Quota mode: none.
Jul 10 00:36:43.164706 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:36:43.165956 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:36:43.176613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:36:43.178194 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:36:43.179066 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:36:43.179164 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:36:43.179238 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:36:43.185656 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (800)
Jul 10 00:36:43.185676 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:36:43.187040 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:36:43.187070 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:36:43.188190 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:36:43.190727 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:36:43.196716 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:36:43.198419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:36:43.245979 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:36:43.250322 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:36:43.254787 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:36:43.258574 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:36:43.329493 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:36:43.341654 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:36:43.343004 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:36:43.347534 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:36:43.363152 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:36:43.365758 ignition[913]: INFO : Ignition 2.19.0
Jul 10 00:36:43.365758 ignition[913]: INFO : Stage: mount
Jul 10 00:36:43.367002 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:36:43.367002 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:36:43.367002 ignition[913]: INFO : mount: mount passed
Jul 10 00:36:43.367002 ignition[913]: INFO : Ignition finished successfully
Jul 10 00:36:43.367941 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:36:43.377625 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:36:43.771374 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:36:43.783771 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:36:43.788532 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (926)
Jul 10 00:36:43.790870 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:36:43.790886 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:36:43.790896 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:36:43.792530 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:36:43.793746 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:36:43.808803 ignition[943]: INFO : Ignition 2.19.0
Jul 10 00:36:43.808803 ignition[943]: INFO : Stage: files
Jul 10 00:36:43.809970 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:36:43.809970 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:36:43.809970 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:36:43.812394 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:36:43.812394 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:36:43.812394 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:36:43.812394 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:36:43.816221 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:36:43.816221 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 10 00:36:43.816221 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 10 00:36:43.812614 unknown[943]: wrote ssh authorized keys file for user: core
Jul 10 00:36:43.916653 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:36:44.102909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 10 00:36:44.102909 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:36:44.105671 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 10 00:36:44.561361 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:36:44.624636 systemd-networkd[760]: eth0: Gained IPv6LL
Jul 10 00:36:45.061048 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:36:45.061048 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 10 00:36:45.063768 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:36:45.085936 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:36:45.089905 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:36:45.092201 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:36:45.092201 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:36:45.092201 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:36:45.092201 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:36:45.092201 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:36:45.092201 ignition[943]: INFO : files: files passed
Jul 10 00:36:45.092201 ignition[943]: INFO : Ignition finished successfully
Jul 10 00:36:45.092493 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:36:45.104735 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:36:45.106312 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:36:45.110178 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:36:45.110967 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:36:45.113784 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:36:45.115917 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:36:45.115917 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:36:45.118161 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:36:45.117603 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:36:45.119756 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:36:45.134702 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:36:45.153286 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:36:45.153390 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:36:45.155929 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:36:45.157116 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:36:45.158367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:36:45.159101 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:36:45.173950 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:36:45.189712 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:36:45.197703 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:36:45.198538 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:36:45.200058 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:36:45.201389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:36:45.201451 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:36:45.203321 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:36:45.204719 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:36:45.205898 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:36:45.207121 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:36:45.208489 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:36:45.209924 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:36:45.211203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:36:45.212579 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:36:45.213962 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:36:45.215168 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:36:45.216259 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:36:45.216322 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:36:45.218167 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:36:45.219504 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:36:45.220952 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:36:45.222405 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:36:45.223363 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:36:45.223420 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:36:45.225547 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:36:45.225593 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:36:45.226997 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:36:45.228148 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:36:45.231591 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:36:45.232531 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:36:45.234071 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:36:45.235219 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:36:45.235260 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:36:45.236369 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:36:45.236399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:36:45.237536 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:36:45.237578 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:36:45.239028 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:36:45.239068 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:36:45.250609 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:36:45.251335 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:36:45.251387 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:36:45.254434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:36:45.255924 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:36:45.256758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:36:45.258550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:36:45.259279 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:36:45.260845 ignition[997]: INFO : Ignition 2.19.0 Jul 10 00:36:45.260845 ignition[997]: INFO : Stage: umount Jul 10 00:36:45.260845 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:36:45.260845 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:36:45.263583 ignition[997]: INFO : umount: umount passed Jul 10 00:36:45.263583 ignition[997]: INFO : Ignition finished successfully Jul 10 00:36:45.264053 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:36:45.264135 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:36:45.265922 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:36:45.266384 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:36:45.266463 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 10 00:36:45.268841 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:36:45.268935 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:36:45.271206 systemd[1]: Stopped target network.target - Network. Jul 10 00:36:45.272266 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:36:45.272327 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:36:45.273623 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:36:45.273666 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:36:45.274787 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:36:45.274826 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:36:45.276030 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:36:45.276070 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:36:45.277347 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:36:45.277399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:36:45.279162 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:36:45.280359 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:36:45.285570 systemd-networkd[760]: eth0: DHCPv6 lease lost Jul 10 00:36:45.287254 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:36:45.287385 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:36:45.290049 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:36:45.291043 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:36:45.293081 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:36:45.293129 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:36:45.304607 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:36:45.305243 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:36:45.305292 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:36:45.306769 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:36:45.306808 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:36:45.308043 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:36:45.308078 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:36:45.309501 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:36:45.309557 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:36:45.311122 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:36:45.320397 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:36:45.320555 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:36:45.328217 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:36:45.328375 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:36:45.330076 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:36:45.330117 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 10 00:36:45.331251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:36:45.331281 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:36:45.332554 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:36:45.332598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:36:45.334620 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:36:45.334663 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:36:45.336516 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:36:45.336569 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:36:45.346671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:36:45.347426 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:36:45.347481 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:36:45.349101 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:36:45.349143 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:36:45.354063 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:36:45.354868 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:36:45.355881 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:36:45.357866 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:36:45.366892 systemd[1]: Switching root. Jul 10 00:36:45.396438 systemd-journald[238]: Journal stopped Jul 10 00:36:46.128918 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 10 00:36:46.128991 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:36:46.129005 kernel: SELinux: policy capability open_perms=1 Jul 10 00:36:46.129015 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:36:46.129028 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:36:46.129038 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:36:46.129060 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:36:46.129069 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:36:46.129079 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:36:46.129089 kernel: audit: type=1403 audit(1752107805.572:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:36:46.129099 systemd[1]: Successfully loaded SELinux policy in 33.783ms. Jul 10 00:36:46.129120 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.532ms. Jul 10 00:36:46.129132 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 10 00:36:46.129145 systemd[1]: Detected virtualization kvm. Jul 10 00:36:46.129156 systemd[1]: Detected architecture arm64. Jul 10 00:36:46.129169 systemd[1]: Detected first boot. Jul 10 00:36:46.129180 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:36:46.129190 zram_generator::config[1042]: No configuration found. Jul 10 00:36:46.129202 systemd[1]: Populated /etc with preset unit settings. 
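[Annotation] The "Populated /etc with preset unit settings" step applies systemd preset files, which is also how the enable/disable decisions recorded by Ignition earlier take effect. Preset syntax is one directive per line; a sketch consistent with this log (the file name is hypothetical):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service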
Jul 10 00:36:46.129212 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:36:46.129222 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:36:46.129235 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:36:46.129246 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:36:46.129257 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:36:46.129267 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:36:46.129278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:36:46.129289 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:36:46.129301 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:36:46.129311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:36:46.129322 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 00:36:46.129335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:36:46.129346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:36:46.129356 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:36:46.129367 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:36:46.129377 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:36:46.129388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:36:46.129399 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 00:36:46.129410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:36:46.129421 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:36:46.129433 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:36:46.129444 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:36:46.129455 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:36:46.129465 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:36:46.129476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:36:46.129487 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:36:46.129498 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:36:46.129510 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:36:46.129539 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:36:46.129553 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:36:46.129564 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:36:46.129575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:36:46.129585 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:36:46.129608 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 10 00:36:46.129619 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:36:46.129631 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:36:46.129642 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:36:46.129655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:36:46.129666 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:36:46.129685 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:36:46.129696 systemd[1]: Reached target machines.target - Containers. Jul 10 00:36:46.129706 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:36:46.129717 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:36:46.129728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:36:46.129739 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:36:46.129751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:36:46.129762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:36:46.129774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:36:46.129784 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:36:46.129795 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:36:46.129808 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:36:46.129819 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:36:46.129829 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:36:46.129842 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:36:46.129853 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:36:46.129863 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:36:46.129874 kernel: loop: module loaded Jul 10 00:36:46.129884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:36:46.129895 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:36:46.129905 kernel: fuse: init (API version 7.39) Jul 10 00:36:46.129915 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:36:46.129926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:36:46.129937 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:36:46.129950 systemd[1]: Stopped verity-setup.service. Jul 10 00:36:46.129961 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:36:46.129972 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:36:46.129983 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:36:46.129993 kernel: ACPI: bus type drm_connector registered Jul 10 00:36:46.130003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
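[Annotation] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, etc. instances above all come from a single template unit, with %i standing for the text after the '@'. A minimal sketch of how such a template works, not the exact unit systemd ships:

    [Unit]
    Description=Load Kernel Module %i

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/modprobe -ab %i

Starting modprobe@fuse.service then expands %i to "fuse" and runs "modprobe -ab fuse".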
Jul 10 00:36:46.130013 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:36:46.130026 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:36:46.130037 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:36:46.130047 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:36:46.130058 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:36:46.130094 systemd-journald[1110]: Collecting audit messages is disabled. Jul 10 00:36:46.130118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:36:46.130129 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:36:46.130140 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:36:46.130151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:36:46.130162 systemd-journald[1110]: Journal started Jul 10 00:36:46.130183 systemd-journald[1110]: Runtime Journal (/run/log/journal/5b019331feed41a4a1d4f0bbbf54e2c8) is 5.9M, max 47.3M, 41.4M free. Jul 10 00:36:45.935556 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:36:45.955387 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:36:45.955780 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:36:46.132727 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:36:46.133376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:36:46.134596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:36:46.135824 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:36:46.136885 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:36:46.137030 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:36:46.138062 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:36:46.138201 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:36:46.139276 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:36:46.140413 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:36:46.141770 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:36:46.153758 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:36:46.166645 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:36:46.168446 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:36:46.169312 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:36:46.169349 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:36:46.170998 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 10 00:36:46.172800 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:36:46.176542 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:36:46.177663 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
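[Annotation] The "Runtime Journal ... is 5.9M, max 47.3M" line shows journald capping its /run usage, with the limit computed from the size of the backing filesystem. The same caps can be set explicitly in journald.conf; the values below are illustrative, not what this system uses:

    # /etc/systemd/journald.conf
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=196M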
Jul 10 00:36:46.179200 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:36:46.180961 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:36:46.181906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:36:46.183690 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:36:46.184592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:36:46.185703 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:36:46.189871 systemd-journald[1110]: Time spent on flushing to /var/log/journal/5b019331feed41a4a1d4f0bbbf54e2c8 is 26.148ms for 851 entries. Jul 10 00:36:46.189871 systemd-journald[1110]: System Journal (/var/log/journal/5b019331feed41a4a1d4f0bbbf54e2c8) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:36:46.223028 systemd-journald[1110]: Received client request to flush runtime journal. Jul 10 00:36:46.223073 kernel: loop0: detected capacity change from 0 to 114328 Jul 10 00:36:46.189864 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:36:46.193313 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:36:46.198926 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:36:46.200069 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:36:46.201055 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:36:46.202161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:36:46.216573 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 10 00:36:46.217897 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:36:46.219409 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:36:46.224785 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 10 00:36:46.227802 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:36:46.231541 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:36:46.234840 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:36:46.235764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:36:46.252868 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:36:46.257163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:36:46.262604 kernel: loop1: detected capacity change from 0 to 114432 Jul 10 00:36:46.273873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:36:46.276122 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 10 00:36:46.288229 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 10 00:36:46.288245 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 10 00:36:46.292879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
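[Annotation] systemd-tmpfiles-setup-dev and the later tmpfiles services consume tmpfiles.d(5) lines; the "ACLs are not supported, ignoring" messages mean entries carrying ACL fields are skipped on this filesystem. The line format, with hypothetical paths:

    # Type Path          Mode UID  GID  Age Argument
    d      /run/example  0755 root root -   -
    L      /tmp/example  -    -    -    -   /run/example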
Jul 10 00:36:46.293543 kernel: loop2: detected capacity change from 0 to 203944 Jul 10 00:36:46.334551 kernel: loop3: detected capacity change from 0 to 114328 Jul 10 00:36:46.340535 kernel: loop4: detected capacity change from 0 to 114432 Jul 10 00:36:46.344542 kernel: loop5: detected capacity change from 0 to 203944 Jul 10 00:36:46.347405 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:36:46.347797 (sd-merge)[1179]: Merged extensions into '/usr'. Jul 10 00:36:46.353847 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:36:46.353863 systemd[1]: Reloading... Jul 10 00:36:46.403557 zram_generator::config[1205]: No configuration found. Jul 10 00:36:46.463714 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:36:46.514278 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:36:46.549706 systemd[1]: Reloading finished in 195 ms. Jul 10 00:36:46.583698 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:36:46.586557 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:36:46.597763 systemd[1]: Starting ensure-sysext.service... Jul 10 00:36:46.599677 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:36:46.610775 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:36:46.610791 systemd[1]: Reloading... Jul 10 00:36:46.627704 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:36:46.628164 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:36:46.629829 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:36:46.630161 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 10 00:36:46.630290 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 10 00:36:46.639825 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:36:46.640744 systemd-tmpfiles[1240]: Skipping /boot Jul 10 00:36:46.651639 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:36:46.651938 systemd-tmpfiles[1240]: Skipping /boot Jul 10 00:36:46.656537 zram_generator::config[1267]: No configuration found. Jul 10 00:36:46.741177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:36:46.776814 systemd[1]: Reloading finished in 165 ms. Jul 10 00:36:46.790702 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:36:46.804966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:36:46.812752 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:36:46.815184 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
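[Annotation] The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, hence the loop device scans just above; the kubernetes image was linked into /etc/extensions by Ignition earlier. For inspecting and re-merging by hand:

    systemd-sysext list      # show images and their merge state
    systemd-sysext refresh   # unmerge and re-merge all images

Each image must ship /usr/lib/extension-release.d/extension-release.<name> whose ID= matches the host's os-release (or ID=_any) for the merge to be accepted.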
Jul 10 00:36:46.817538 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:36:46.822870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:36:46.836882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:36:46.841885 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:36:46.843621 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:36:46.847390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:36:46.851813 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:36:46.853601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:36:46.858780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:36:46.859788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:36:46.861354 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:36:46.864818 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:36:46.867144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:36:46.867294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:36:46.870466 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Jul 10 00:36:46.876833 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:36:46.878404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:36:46.879511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:36:46.881703 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:36:46.883371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:36:46.883559 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:36:46.885031 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:36:46.886563 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:36:46.888149 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:36:46.890580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:36:46.892138 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:36:46.892268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:36:46.893588 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:36:46.894731 augenrules[1333]: No rules Jul 10 00:36:46.896556 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 10 00:36:46.908260 systemd[1]: Finished ensure-sysext.service. Jul 10 00:36:46.910495 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:36:46.923815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 10 00:36:46.928125 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:36:46.931546 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:36:46.935529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:36:46.936385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:36:46.938129 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:36:46.944760 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:36:46.945595 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:36:46.946095 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:36:46.946232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:36:46.947507 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:36:46.947686 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:36:46.948820 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:36:46.950212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:36:46.950341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:36:46.951811 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:36:46.952047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:36:46.966436 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 00:36:46.969946 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:36:46.970014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:36:46.971628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1355) Jul 10 00:36:47.006048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:36:47.014830 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:36:47.038391 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:36:47.048357 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:36:47.049570 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:36:47.055564 systemd-networkd[1370]: lo: Link UP Jul 10 00:36:47.055571 systemd-networkd[1370]: lo: Gained carrier Jul 10 00:36:47.056321 systemd-networkd[1370]: Enumeration completed Jul 10 00:36:47.056439 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:36:47.059646 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:36:47.059668 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
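[Annotation] eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network. A sketch of what such a unit typically contains; the file Flatcar actually ships may differ in detail:

    [Match]
    Name=*

    [Network]
    DHCP=yes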
Jul 10 00:36:47.063790 systemd-networkd[1370]: eth0: Link UP Jul 10 00:36:47.063800 systemd-networkd[1370]: eth0: Gained carrier Jul 10 00:36:47.063815 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:36:47.066427 systemd-resolved[1307]: Positive Trust Anchors: Jul 10 00:36:47.070352 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:36:47.070390 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:36:47.071812 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:36:47.079059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:36:47.080592 systemd-networkd[1370]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:36:47.083328 systemd-timesyncd[1372]: Network configuration changed, trying to establish connection. Jul 10 00:36:47.085629 systemd-resolved[1307]: Defaulting to hostname 'linux'. Jul 10 00:36:46.650092 systemd-journald[1110]: Time jumped backwards, rotating. Jul 10 00:36:47.087511 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:36:46.643552 systemd-timesyncd[1372]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:36:46.643596 systemd-timesyncd[1372]: Initial clock synchronization to Thu 2025-07-10 00:36:46.643383 UTC. Jul 10 00:36:46.643640 systemd-resolved[1307]: Clock change detected. Flushing caches. Jul 10 00:36:46.646409 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 00:36:46.648271 systemd[1]: Reached target network.target - Network. Jul 10 00:36:46.649304 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:36:46.658606 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 00:36:46.678413 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:36:46.693138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:36:46.715015 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 00:36:46.716248 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:36:46.717233 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:36:46.718260 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:36:46.719319 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:36:46.720589 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:36:46.721652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
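[Annotation] systemd-timesyncd syncs here against 10.0.0.1:123, most likely the NTP server offered in the DHCPv4 lease shown above. To pin servers explicitly instead, illustrative values:

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1
    FallbackNTP=0.pool.ntp.org

The "Time jumped backwards, rotating" and "Clock change detected. Flushing caches." messages are the expected fallout of the initial synchronization, which is why some of the surrounding entries carry earlier timestamps.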
Jul 10 00:36:46.722582 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:36:46.723448 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:36:46.723481 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:36:46.724130 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:36:46.725878 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:36:46.728256 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:36:46.739494 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:36:46.741553 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 00:36:46.742990 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:36:46.744003 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:36:46.744812 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:36:46.745617 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:36:46.745645 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:36:46.746604 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:36:46.748483 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:36:46.751512 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:36:46.751489 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:36:46.756705 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:36:46.758646 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:36:46.759956 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:36:46.764249 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:36:46.766964 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:36:46.775957 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:36:46.777385 jq[1409]: false Jul 10 00:36:46.779177 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:36:46.780864 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:36:46.781331 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:36:46.782566 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:36:46.785537 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:36:46.789299 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jul 10 00:36:46.792764 extend-filesystems[1410]: Found loop3 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found loop4 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found loop5 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda1 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda2 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda3 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found usr Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda4 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda6 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda7 Jul 10 00:36:46.792764 extend-filesystems[1410]: Found vda9 Jul 10 00:36:46.792764 extend-filesystems[1410]: Checking size of /dev/vda9 Jul 10 00:36:46.796964 dbus-daemon[1408]: [system] SELinux support is enabled Jul 10 00:36:46.803697 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:36:46.806958 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:36:46.807139 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:36:46.807412 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:36:46.807563 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:36:46.809930 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:36:46.810098 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:36:46.820549 jq[1425]: true Jul 10 00:36:46.820873 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:36:46.820911 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:36:46.823516 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:36:46.823544 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:36:46.830526 update_engine[1420]: I20250710 00:36:46.830271 1420 main.cc:92] Flatcar Update Engine starting Jul 10 00:36:46.832576 (ntainerd)[1431]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:36:46.844280 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:36:46.845780 extend-filesystems[1410]: Resized partition /dev/vda9 Jul 10 00:36:46.848557 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1352) Jul 10 00:36:46.848583 update_engine[1420]: I20250710 00:36:46.845664 1420 update_check_scheduler.cc:74] Next update check in 3m16s Jul 10 00:36:46.851585 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Jul 10 00:36:46.857806 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:36:46.856956 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:36:46.859704 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:36:46.862412 jq[1439]: true Jul 10 00:36:46.862743 systemd-logind[1418]: New seat seat0. Jul 10 00:36:46.863732 systemd[1]: Started systemd-logind.service - User Login Management. 
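[Annotation] extend-filesystems is growing the root filesystem on /dev/vda9 on-line (ext4 can grow while mounted, but not shrink). Roughly the equivalent by hand, assuming a partition-growing tool such as growpart from cloud-utils is available:

    growpart /dev/vda 9     # extend partition 9 to the end of the disk
    resize2fs /dev/vda9     # grow the mounted ext4 to fill the partition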
Jul 10 00:36:46.879303 tar[1430]: linux-arm64/helm Jul 10 00:36:46.879623 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:36:46.888371 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:36:46.888371 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:36:46.888371 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:36:46.892562 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Jul 10 00:36:46.892903 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:36:46.895406 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:36:46.962133 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:36:46.963127 locksmithd[1445]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:36:46.965037 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:36:46.967104 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:36:47.095904 containerd[1431]: time="2025-07-10T00:36:47.095825173Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 10 00:36:47.104382 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:36:47.122852 containerd[1431]: time="2025-07-10T00:36:47.122805773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.124125 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:36:47.127267 containerd[1431]: time="2025-07-10T00:36:47.127233933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:36:47.127339 containerd[1431]: time="2025-07-10T00:36:47.127326053Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:36:47.127480 containerd[1431]: time="2025-07-10T00:36:47.127464453Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:36:47.127692 containerd[1431]: time="2025-07-10T00:36:47.127671213Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 00:36:47.127753 containerd[1431]: time="2025-07-10T00:36:47.127740733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.127865 containerd[1431]: time="2025-07-10T00:36:47.127844973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:36:47.127924 containerd[1431]: time="2025-07-10T00:36:47.127909053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.128173 containerd[1431]: time="2025-07-10T00:36:47.128153893Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:36:47.128244 containerd[1431]: time="2025-07-10T00:36:47.128230293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.128474 containerd[1431]: time="2025-07-10T00:36:47.128455933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:36:47.128537 containerd[1431]: time="2025-07-10T00:36:47.128523173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.128678 containerd[1431]: time="2025-07-10T00:36:47.128659413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.128921 containerd[1431]: time="2025-07-10T00:36:47.128900133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:36:47.129093 containerd[1431]: time="2025-07-10T00:36:47.129074133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:36:47.129164 containerd[1431]: time="2025-07-10T00:36:47.129149493Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:36:47.129313 containerd[1431]: time="2025-07-10T00:36:47.129295213Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:36:47.129430 containerd[1431]: time="2025-07-10T00:36:47.129413853Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:36:47.133322 containerd[1431]: time="2025-07-10T00:36:47.133297173Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:36:47.133435 containerd[1431]: time="2025-07-10T00:36:47.133419013Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:36:47.133507 containerd[1431]: time="2025-07-10T00:36:47.133479133Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 00:36:47.133596 containerd[1431]: time="2025-07-10T00:36:47.133579933Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 00:36:47.133651 containerd[1431]: time="2025-07-10T00:36:47.133638053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:36:47.133814 containerd[1431]: time="2025-07-10T00:36:47.133793293Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:36:47.134102 containerd[1431]: time="2025-07-10T00:36:47.134081733Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:36:47.134270 containerd[1431]: time="2025-07-10T00:36:47.134250493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 10 00:36:47.134334 containerd[1431]: time="2025-07-10T00:36:47.134320933Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 00:36:47.134525 containerd[1431]: time="2025-07-10T00:36:47.134409333Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 00:36:47.134867 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:36:47.135680 containerd[1431]: time="2025-07-10T00:36:47.135600973Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.135747 containerd[1431]: time="2025-07-10T00:36:47.135733293Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.135800 containerd[1431]: time="2025-07-10T00:36:47.135787533Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.135854 containerd[1431]: time="2025-07-10T00:36:47.135842013Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.136131 containerd[1431]: time="2025-07-10T00:36:47.135896893Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.136270 containerd[1431]: time="2025-07-10T00:36:47.136249893Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.136329 containerd[1431]: time="2025-07-10T00:36:47.136317093Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.136394 containerd[1431]: time="2025-07-10T00:36:47.136379693Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:36:47.136463 containerd[1431]: time="2025-07-10T00:36:47.136449133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136532 containerd[1431]: time="2025-07-10T00:36:47.136515933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136644 containerd[1431]: time="2025-07-10T00:36:47.136628493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136714 containerd[1431]: time="2025-07-10T00:36:47.136695893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136765 containerd[1431]: time="2025-07-10T00:36:47.136753293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136824 containerd[1431]: time="2025-07-10T00:36:47.136812213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136876 containerd[1431]: time="2025-07-10T00:36:47.136864773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.136936 containerd[1431]: time="2025-07-10T00:36:47.136923013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 10 00:36:47.136988 containerd[1431]: time="2025-07-10T00:36:47.136975853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137040 containerd[1431]: time="2025-07-10T00:36:47.137029333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137091 containerd[1431]: time="2025-07-10T00:36:47.137080413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137426 containerd[1431]: time="2025-07-10T00:36:47.137130013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137426 containerd[1431]: time="2025-07-10T00:36:47.137149933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137426 containerd[1431]: time="2025-07-10T00:36:47.137169093Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 00:36:47.137426 containerd[1431]: time="2025-07-10T00:36:47.137190573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137426 containerd[1431]: time="2025-07-10T00:36:47.137202693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.137426 containerd[1431]: time="2025-07-10T00:36:47.137214293Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:36:47.138025 containerd[1431]: time="2025-07-10T00:36:47.137993693Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:36:47.138107 containerd[1431]: time="2025-07-10T00:36:47.138091773Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 00:36:47.138157 containerd[1431]: time="2025-07-10T00:36:47.138143613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:36:47.138222 containerd[1431]: time="2025-07-10T00:36:47.138207773Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 00:36:47.138269 containerd[1431]: time="2025-07-10T00:36:47.138257053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:36:47.138398 containerd[1431]: time="2025-07-10T00:36:47.138307453Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 00:36:47.138398 containerd[1431]: time="2025-07-10T00:36:47.138322653Z" level=info msg="NRI interface is disabled by configuration." Jul 10 00:36:47.138398 containerd[1431]: time="2025-07-10T00:36:47.138335093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:36:47.139416 containerd[1431]: time="2025-07-10T00:36:47.138794053Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:36:47.139416 containerd[1431]: time="2025-07-10T00:36:47.138860333Z" level=info msg="Connect containerd service" Jul 10 00:36:47.139416 containerd[1431]: time="2025-07-10T00:36:47.138887773Z" level=info msg="using legacy CRI server" Jul 10 00:36:47.139416 containerd[1431]: time="2025-07-10T00:36:47.138895533Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:36:47.139416 containerd[1431]: time="2025-07-10T00:36:47.138975893Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:36:47.139647 containerd[1431]: time="2025-07-10T00:36:47.139612413Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:36:47.140259 
containerd[1431]: time="2025-07-10T00:36:47.140109933Z" level=info msg="Start subscribing containerd event" Jul 10 00:36:47.140259 containerd[1431]: time="2025-07-10T00:36:47.140159293Z" level=info msg="Start recovering state" Jul 10 00:36:47.140491 containerd[1431]: time="2025-07-10T00:36:47.140472213Z" level=info msg="Start event monitor" Jul 10 00:36:47.141122 containerd[1431]: time="2025-07-10T00:36:47.140561813Z" level=info msg="Start snapshots syncer" Jul 10 00:36:47.141122 containerd[1431]: time="2025-07-10T00:36:47.140578453Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:36:47.141122 containerd[1431]: time="2025-07-10T00:36:47.140593933Z" level=info msg="Start streaming server" Jul 10 00:36:47.141122 containerd[1431]: time="2025-07-10T00:36:47.140478333Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:36:47.141122 containerd[1431]: time="2025-07-10T00:36:47.140799493Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:36:47.141122 containerd[1431]: time="2025-07-10T00:36:47.140845933Z" level=info msg="containerd successfully booted in 0.046392s" Jul 10 00:36:47.140951 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:36:47.141142 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:36:47.142201 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:36:47.145239 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:36:47.159842 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:36:47.175703 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:36:47.177933 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 00:36:47.179568 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:36:47.231989 tar[1430]: linux-arm64/LICENSE Jul 10 00:36:47.232195 tar[1430]: linux-arm64/README.md Jul 10 00:36:47.246413 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:36:48.212556 systemd-networkd[1370]: eth0: Gained IPv6LL Jul 10 00:36:48.218266 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:36:48.219981 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:36:48.232704 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:36:48.234862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:36:48.236939 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:36:48.251722 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:36:48.253403 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:36:48.255216 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:36:48.266543 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:36:48.823614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:36:48.824894 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:36:48.828169 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:36:48.829542 systemd[1]: Startup finished in 554ms (kernel) + 4.870s (initrd) + 3.735s (userspace) = 9.160s. 
Jul 10 00:36:49.295063 kubelet[1520]: E0710 00:36:49.294964 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:36:49.297224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:36:49.297383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:36:52.571750 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:36:52.573027 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:34460.service - OpenSSH per-connection server daemon (10.0.0.1:34460). Jul 10 00:36:52.656476 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 34460 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:52.658597 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:52.674603 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:36:52.688694 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:36:52.690347 systemd-logind[1418]: New session 1 of user core. Jul 10 00:36:52.698442 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:36:52.700825 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:36:52.707405 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:36:52.785623 systemd[1538]: Queued start job for default target default.target. Jul 10 00:36:52.796303 systemd[1538]: Created slice app.slice - User Application Slice. Jul 10 00:36:52.796334 systemd[1538]: Reached target paths.target - Paths. Jul 10 00:36:52.796347 systemd[1538]: Reached target timers.target - Timers. Jul 10 00:36:52.797602 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:36:52.808116 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:36:52.808178 systemd[1538]: Reached target sockets.target - Sockets. Jul 10 00:36:52.808190 systemd[1538]: Reached target basic.target - Basic System. Jul 10 00:36:52.808224 systemd[1538]: Reached target default.target - Main User Target. Jul 10 00:36:52.808250 systemd[1538]: Startup finished in 95ms. Jul 10 00:36:52.808576 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:36:52.809839 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:36:52.872497 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:34462.service - OpenSSH per-connection server daemon (10.0.0.1:34462). Jul 10 00:36:52.931132 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 34462 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:52.932606 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:52.936337 systemd-logind[1418]: New session 2 of user core. Jul 10 00:36:52.959566 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:36:53.011035 sshd[1549]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:53.020705 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:34462.service: Deactivated successfully. Jul 10 00:36:53.022687 systemd[1]: session-2.scope: Deactivated successfully. 
Jul 10 00:36:53.025571 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:36:53.025892 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:34476.service - OpenSSH per-connection server daemon (10.0.0.1:34476). Jul 10 00:36:53.027032 systemd-logind[1418]: Removed session 2. Jul 10 00:36:53.063920 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 34476 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:53.065227 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:53.069955 systemd-logind[1418]: New session 3 of user core. Jul 10 00:36:53.076498 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:36:53.124820 sshd[1556]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:53.133732 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:34476.service: Deactivated successfully. Jul 10 00:36:53.135032 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:36:53.136237 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:36:53.137409 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:34490.service - OpenSSH per-connection server daemon (10.0.0.1:34490). Jul 10 00:36:53.138123 systemd-logind[1418]: Removed session 3. Jul 10 00:36:53.190605 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 34490 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:53.192250 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:53.195897 systemd-logind[1418]: New session 4 of user core. Jul 10 00:36:53.204566 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:36:53.258077 sshd[1563]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:53.266791 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:34490.service: Deactivated successfully. Jul 10 00:36:53.268237 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:36:53.270826 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:36:53.274175 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:34502.service - OpenSSH per-connection server daemon (10.0.0.1:34502). Jul 10 00:36:53.274936 systemd-logind[1418]: Removed session 4. Jul 10 00:36:53.305945 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 34502 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:53.307190 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:53.311543 systemd-logind[1418]: New session 5 of user core. Jul 10 00:36:53.320655 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:36:53.380646 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:36:53.380935 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:36:53.394314 sudo[1573]: pam_unix(sudo:session): session closed for user root Jul 10 00:36:53.396044 sshd[1570]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:53.410891 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:34502.service: Deactivated successfully. Jul 10 00:36:53.412391 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:36:53.415047 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:36:53.435746 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:34508.service - OpenSSH per-connection server daemon (10.0.0.1:34508). 
Jul 10 00:36:53.437534 systemd-logind[1418]: Removed session 5. Jul 10 00:36:53.464988 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 34508 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:53.466323 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:53.470978 systemd-logind[1418]: New session 6 of user core. Jul 10 00:36:53.478611 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:36:53.530312 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:36:53.530616 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:36:53.533530 sudo[1582]: pam_unix(sudo:session): session closed for user root Jul 10 00:36:53.537951 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 00:36:53.538489 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:36:53.555670 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 10 00:36:53.556954 auditctl[1585]: No rules Jul 10 00:36:53.557817 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:36:53.558021 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 10 00:36:53.559847 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:36:53.583444 augenrules[1603]: No rules Jul 10 00:36:53.585471 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 10 00:36:53.587052 sudo[1581]: pam_unix(sudo:session): session closed for user root Jul 10 00:36:53.588753 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 10 00:36:53.600913 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:34508.service: Deactivated successfully. Jul 10 00:36:53.602462 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:36:53.603718 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:36:53.610688 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:34520.service - OpenSSH per-connection server daemon (10.0.0.1:34520). Jul 10 00:36:53.611548 systemd-logind[1418]: Removed session 6. Jul 10 00:36:53.639721 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 34520 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:36:53.641823 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:36:53.646324 systemd-logind[1418]: New session 7 of user core. Jul 10 00:36:53.655532 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:36:53.707441 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:36:53.708066 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:36:54.017755 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:36:54.018106 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:36:54.303009 dockerd[1632]: time="2025-07-10T00:36:54.302878853Z" level=info msg="Starting up" Jul 10 00:36:54.463619 dockerd[1632]: time="2025-07-10T00:36:54.463550053Z" level=info msg="Loading containers: start." 
Jul 10 00:36:54.583485 kernel: Initializing XFRM netlink socket Jul 10 00:36:54.665798 systemd-networkd[1370]: docker0: Link UP Jul 10 00:36:54.686847 dockerd[1632]: time="2025-07-10T00:36:54.686803933Z" level=info msg="Loading containers: done." Jul 10 00:36:54.702136 dockerd[1632]: time="2025-07-10T00:36:54.702050613Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:36:54.702312 dockerd[1632]: time="2025-07-10T00:36:54.702182973Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 10 00:36:54.702312 dockerd[1632]: time="2025-07-10T00:36:54.702302973Z" level=info msg="Daemon has completed initialization" Jul 10 00:36:54.738961 dockerd[1632]: time="2025-07-10T00:36:54.738677133Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:36:54.738918 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:36:55.446495 containerd[1431]: time="2025-07-10T00:36:55.446445133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:36:56.130805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162510713.mount: Deactivated successfully. Jul 10 00:36:56.994864 containerd[1431]: time="2025-07-10T00:36:56.994802493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:56.995442 containerd[1431]: time="2025-07-10T00:36:56.995384133Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 10 00:36:56.996176 containerd[1431]: time="2025-07-10T00:36:56.996107773Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:56.999863 containerd[1431]: time="2025-07-10T00:36:56.999811013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:57.001789 containerd[1431]: time="2025-07-10T00:36:57.001738413Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.55524876s" Jul 10 00:36:57.001845 containerd[1431]: time="2025-07-10T00:36:57.001791493Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 10 00:36:57.005008 containerd[1431]: time="2025-07-10T00:36:57.004959973Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:36:58.062147 containerd[1431]: time="2025-07-10T00:36:58.062092693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:58.062732 containerd[1431]: time="2025-07-10T00:36:58.062695773Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 10 00:36:58.063652 containerd[1431]: time="2025-07-10T00:36:58.063603333Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:58.066619 containerd[1431]: time="2025-07-10T00:36:58.066566053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:58.068410 containerd[1431]: time="2025-07-10T00:36:58.068222853Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.06322276s" Jul 10 00:36:58.068410 containerd[1431]: time="2025-07-10T00:36:58.068261013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 10 00:36:58.068807 containerd[1431]: time="2025-07-10T00:36:58.068784053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:36:59.124867 containerd[1431]: time="2025-07-10T00:36:59.124817413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:59.125914 containerd[1431]: time="2025-07-10T00:36:59.125890413Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 10 00:36:59.126909 containerd[1431]: time="2025-07-10T00:36:59.126852373Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:59.130966 containerd[1431]: time="2025-07-10T00:36:59.130303973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:36:59.131790 containerd[1431]: time="2025-07-10T00:36:59.131401053Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.0625856s" Jul 10 00:36:59.131790 containerd[1431]: time="2025-07-10T00:36:59.131441613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 10 00:36:59.131948 containerd[1431]: time="2025-07-10T00:36:59.131908453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:36:59.547755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:36:59.558568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 10 00:36:59.669976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:36:59.676666 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:36:59.724768 kubelet[1849]: E0710 00:36:59.724375 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:36:59.727941 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:36:59.728107 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:37:00.141933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912347089.mount: Deactivated successfully. Jul 10 00:37:00.500371 containerd[1431]: time="2025-07-10T00:37:00.500231093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:00.501047 containerd[1431]: time="2025-07-10T00:37:00.501002453Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 10 00:37:00.501878 containerd[1431]: time="2025-07-10T00:37:00.501838693Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:00.504085 containerd[1431]: time="2025-07-10T00:37:00.504019013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:00.504803 containerd[1431]: time="2025-07-10T00:37:00.504721053Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.3727746s" Jul 10 00:37:00.504803 containerd[1431]: time="2025-07-10T00:37:00.504755653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 10 00:37:00.505255 containerd[1431]: time="2025-07-10T00:37:00.505224213Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:37:00.962872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3707648371.mount: Deactivated successfully. 
Jul 10 00:37:01.787283 containerd[1431]: time="2025-07-10T00:37:01.787074493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:01.788143 containerd[1431]: time="2025-07-10T00:37:01.787904013Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 10 00:37:01.788754 containerd[1431]: time="2025-07-10T00:37:01.788718253Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:01.792351 containerd[1431]: time="2025-07-10T00:37:01.792280893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:01.793715 containerd[1431]: time="2025-07-10T00:37:01.793627933Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.28837492s" Jul 10 00:37:01.793715 containerd[1431]: time="2025-07-10T00:37:01.793666573Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 00:37:01.794183 containerd[1431]: time="2025-07-10T00:37:01.794148133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:37:02.215668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1867980522.mount: Deactivated successfully. 
Jul 10 00:37:02.220827 containerd[1431]: time="2025-07-10T00:37:02.220782613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:02.221431 containerd[1431]: time="2025-07-10T00:37:02.221316933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 00:37:02.222405 containerd[1431]: time="2025-07-10T00:37:02.222353413Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:02.224777 containerd[1431]: time="2025-07-10T00:37:02.224709573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:02.225697 containerd[1431]: time="2025-07-10T00:37:02.225614653Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 431.43592ms" Jul 10 00:37:02.225697 containerd[1431]: time="2025-07-10T00:37:02.225646693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:37:02.226294 containerd[1431]: time="2025-07-10T00:37:02.226252333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:37:02.764739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128064797.mount: Deactivated successfully. Jul 10 00:37:04.246529 containerd[1431]: time="2025-07-10T00:37:04.246473053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:04.247563 containerd[1431]: time="2025-07-10T00:37:04.247225013Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 10 00:37:04.248415 containerd[1431]: time="2025-07-10T00:37:04.248383253Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:04.251854 containerd[1431]: time="2025-07-10T00:37:04.251804653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:04.253202 containerd[1431]: time="2025-07-10T00:37:04.253162413Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.02687944s" Jul 10 00:37:04.253241 containerd[1431]: time="2025-07-10T00:37:04.253201333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 10 00:37:08.954286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:37:08.964575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:37:08.984725 systemd[1]: Reloading requested from client PID 2007 ('systemctl') (unit session-7.scope)... Jul 10 00:37:08.984745 systemd[1]: Reloading... Jul 10 00:37:09.053392 zram_generator::config[2049]: No configuration found. Jul 10 00:37:09.188478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:37:09.246692 systemd[1]: Reloading finished in 261 ms. Jul 10 00:37:09.311078 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:37:09.311169 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:37:09.312590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:37:09.314266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:37:09.417288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:37:09.422330 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:37:09.458711 kubelet[2092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:37:09.458711 kubelet[2092]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:37:09.458711 kubelet[2092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:37:09.459056 kubelet[2092]: I0710 00:37:09.458769 2092 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:37:10.773386 kubelet[2092]: I0710 00:37:10.772462 2092 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:37:10.773386 kubelet[2092]: I0710 00:37:10.772494 2092 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:37:10.773386 kubelet[2092]: I0710 00:37:10.772722 2092 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:37:10.823125 kubelet[2092]: I0710 00:37:10.823090 2092 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:37:10.823255 kubelet[2092]: E0710 00:37:10.823088 2092 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:37:10.830782 kubelet[2092]: E0710 00:37:10.830729 2092 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:37:10.830782 kubelet[2092]: I0710 00:37:10.830767 2092 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:37:10.834110 kubelet[2092]: I0710 00:37:10.834073 2092 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:37:10.834853 kubelet[2092]: I0710 00:37:10.834821 2092 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:37:10.834999 kubelet[2092]: I0710 00:37:10.834960 2092 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:37:10.835153 kubelet[2092]: I0710 00:37:10.834989 2092 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:37:10.835233 kubelet[2092]: I0710 00:37:10.835208 2092 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:37:10.835233 kubelet[2092]: I0710 00:37:10.835218 2092 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:37:10.835486 kubelet[2092]: I0710 00:37:10.835461 2092 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:37:10.837616 kubelet[2092]: I0710 00:37:10.837579 2092 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:37:10.837646 kubelet[2092]: I0710 00:37:10.837619 2092 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:37:10.837646 kubelet[2092]: I0710 00:37:10.837641 2092 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:37:10.837732 kubelet[2092]: I0710 00:37:10.837715 2092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:37:10.838798 kubelet[2092]: W0710 00:37:10.838653 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 10 00:37:10.838833 kubelet[2092]: E0710 00:37:10.838809 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:37:10.839449 kubelet[2092]: W0710 00:37:10.839406 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 10 00:37:10.839474 kubelet[2092]: E0710 00:37:10.839453 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:37:10.843492 kubelet[2092]: I0710 00:37:10.843468 2092 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:37:10.844197 kubelet[2092]: I0710 00:37:10.844182 2092 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:37:10.844446 kubelet[2092]: W0710 00:37:10.844433 2092 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:37:10.845368 kubelet[2092]: I0710 00:37:10.845340 2092 server.go:1274] "Started kubelet" Jul 10 00:37:10.846459 kubelet[2092]: I0710 00:37:10.845648 2092 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:37:10.846459 kubelet[2092]: I0710 00:37:10.845885 2092 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:37:10.846459 kubelet[2092]: I0710 00:37:10.846237 2092 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:37:10.847197 kubelet[2092]: I0710 00:37:10.847178 2092 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:37:10.847671 kubelet[2092]: I0710 00:37:10.847646 2092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:37:10.848259 kubelet[2092]: I0710 00:37:10.848223 2092 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:37:10.849122 kubelet[2092]: E0710 00:37:10.849088 2092 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:37:10.849408 kubelet[2092]: I0710 00:37:10.849394 2092 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:37:10.849586 kubelet[2092]: I0710 00:37:10.849572 2092 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:37:10.849679 kubelet[2092]: I0710 00:37:10.849667 2092 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:37:10.850149 kubelet[2092]: W0710 00:37:10.850114 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 10 00:37:10.850270 kubelet[2092]: E0710 00:37:10.850238 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:37:10.850479 kubelet[2092]: I0710 00:37:10.850448 2092 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:37:10.850546 kubelet[2092]: I0710 00:37:10.850527 2092 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:37:10.851468 kubelet[2092]: E0710 00:37:10.851436 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:37:10.851589 kubelet[2092]: E0710 00:37:10.849943 2092 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.112:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.112:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bcca01b70055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:37:10.845309013 +0000 UTC m=+1.419918161,LastTimestamp:2025-07-10 00:37:10.845309013 +0000 UTC m=+1.419918161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:37:10.851709 kubelet[2092]: E0710 00:37:10.851678 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="200ms" Jul 10 00:37:10.852648 kubelet[2092]: I0710 00:37:10.852616 2092 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:37:10.862592 kubelet[2092]: I0710 00:37:10.862533 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:37:10.863567 kubelet[2092]: I0710 00:37:10.863542 2092 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:37:10.863567 kubelet[2092]: I0710 00:37:10.863564 2092 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:37:10.863655 kubelet[2092]: I0710 00:37:10.863581 2092 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:37:10.863655 kubelet[2092]: E0710 00:37:10.863621 2092 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:37:10.864104 kubelet[2092]: W0710 00:37:10.864075 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 10 00:37:10.864594 kubelet[2092]: E0710 00:37:10.864568 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:37:10.866354 kubelet[2092]: I0710 00:37:10.866306 2092 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:37:10.866354 kubelet[2092]: I0710 00:37:10.866333 2092 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:37:10.866688 kubelet[2092]: I0710 00:37:10.866475 2092 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:37:10.938072 kubelet[2092]: I0710 00:37:10.937998 2092 policy_none.go:49] "None policy: Start" Jul 10 00:37:10.938923 kubelet[2092]: I0710 00:37:10.938901 2092 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:37:10.938975 kubelet[2092]: I0710 00:37:10.938951 2092 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:37:10.948337 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 00:37:10.951625 kubelet[2092]: E0710 00:37:10.951554 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:37:10.962151 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:37:10.964482 kubelet[2092]: E0710 00:37:10.964457 2092 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:37:10.965004 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 00:37:10.972087 kubelet[2092]: I0710 00:37:10.972048 2092 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:37:10.972556 kubelet[2092]: I0710 00:37:10.972243 2092 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:37:10.972556 kubelet[2092]: I0710 00:37:10.972260 2092 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:37:10.972556 kubelet[2092]: I0710 00:37:10.972494 2092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:37:10.973630 kubelet[2092]: E0710 00:37:10.973580 2092 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:37:11.052568 kubelet[2092]: E0710 00:37:11.052440 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="400ms" Jul 10 00:37:11.073529 kubelet[2092]: I0710 00:37:11.073470 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:37:11.073974 kubelet[2092]: E0710 00:37:11.073947 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jul 10 00:37:11.172598 systemd[1]: Created slice kubepods-burstable-pode609212e3a39990c3c2cd9d0028c0c9b.slice - libcontainer container kubepods-burstable-pode609212e3a39990c3c2cd9d0028c0c9b.slice. Jul 10 00:37:11.199443 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 10 00:37:11.202667 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 10 00:37:11.275307 kubelet[2092]: I0710 00:37:11.275280 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:37:11.275629 kubelet[2092]: E0710 00:37:11.275605 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jul 10 00:37:11.350432 kubelet[2092]: I0710 00:37:11.350307 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e609212e3a39990c3c2cd9d0028c0c9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e609212e3a39990c3c2cd9d0028c0c9b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:37:11.350432 kubelet[2092]: I0710 00:37:11.350351 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:37:11.350432 kubelet[2092]: I0710 00:37:11.350394 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:37:11.350432 kubelet[2092]: I0710 00:37:11.350414 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:37:11.350432 kubelet[2092]: I0710 00:37:11.350432 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:37:11.350609 kubelet[2092]: I0710 00:37:11.350450 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:37:11.350609 kubelet[2092]: I0710 00:37:11.350475 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:37:11.350609 kubelet[2092]: I0710 00:37:11.350491 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e609212e3a39990c3c2cd9d0028c0c9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"e609212e3a39990c3c2cd9d0028c0c9b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:37:11.350609 kubelet[2092]: I0710 00:37:11.350504 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e609212e3a39990c3c2cd9d0028c0c9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e609212e3a39990c3c2cd9d0028c0c9b\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:37:11.452996 kubelet[2092]: E0710 00:37:11.452951 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="800ms" Jul 10 00:37:11.497561 kubelet[2092]: E0710 00:37:11.497479 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:11.498131 containerd[1431]: time="2025-07-10T00:37:11.498093533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e609212e3a39990c3c2cd9d0028c0c9b,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:11.502427 kubelet[2092]: E0710 00:37:11.502401 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:11.502861 containerd[1431]: time="2025-07-10T00:37:11.502819973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:11.505076 kubelet[2092]: E0710 00:37:11.505046 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:11.505445 containerd[1431]: time="2025-07-10T00:37:11.505411333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:11.677451 kubelet[2092]: I0710 00:37:11.677337 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:37:11.677813 kubelet[2092]: E0710 00:37:11.677782 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost" Jul 10 00:37:12.101294 kubelet[2092]: W0710 00:37:12.101178 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused Jul 10 00:37:12.101294 kubelet[2092]: E0710 00:37:12.101252 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:37:12.109279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2394439284.mount: Deactivated successfully. 
Jul 10 00:37:12.114530 containerd[1431]: time="2025-07-10T00:37:12.114473573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:37:12.117561 containerd[1431]: time="2025-07-10T00:37:12.117496573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 10 00:37:12.119831 containerd[1431]: time="2025-07-10T00:37:12.119726813Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:37:12.121012 containerd[1431]: time="2025-07-10T00:37:12.120972253Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:37:12.121712 containerd[1431]: time="2025-07-10T00:37:12.121659773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 10 00:37:12.122622 containerd[1431]: time="2025-07-10T00:37:12.122572133Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:37:12.123180 containerd[1431]: time="2025-07-10T00:37:12.123138093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 10 00:37:12.125705 containerd[1431]: time="2025-07-10T00:37:12.125645773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:37:12.129000 containerd[1431]: time="2025-07-10T00:37:12.128948853Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 626.04804ms"
Jul 10 00:37:12.130411 containerd[1431]: time="2025-07-10T00:37:12.130350613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 624.87092ms"
Jul 10 00:37:12.133114 containerd[1431]: time="2025-07-10T00:37:12.132947973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 634.77416ms"
Jul 10 00:37:12.212287 kubelet[2092]: W0710 00:37:12.212229 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Jul 10 00:37:12.212287 kubelet[2092]: E0710 00:37:12.212283 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:37:12.253426 kubelet[2092]: E0710 00:37:12.253353 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.112:6443: connect: connection refused" interval="1.6s"
Jul 10 00:37:12.258479 containerd[1431]: time="2025-07-10T00:37:12.258198653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:12.258479 containerd[1431]: time="2025-07-10T00:37:12.258257653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:12.258479 containerd[1431]: time="2025-07-10T00:37:12.258287413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:12.259294 containerd[1431]: time="2025-07-10T00:37:12.259199933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:12.259370 containerd[1431]: time="2025-07-10T00:37:12.259258933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:12.259411 containerd[1431]: time="2025-07-10T00:37:12.259355893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:12.259432 containerd[1431]: time="2025-07-10T00:37:12.259401173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:12.261308 containerd[1431]: time="2025-07-10T00:37:12.260656213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:12.262139 containerd[1431]: time="2025-07-10T00:37:12.262041893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:12.262573 containerd[1431]: time="2025-07-10T00:37:12.262275653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:12.262804 containerd[1431]: time="2025-07-10T00:37:12.262756813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:12.263016 containerd[1431]: time="2025-07-10T00:37:12.262962653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:12.286514 systemd[1]: Started cri-containerd-0031e5299e7ff656c09d94d0a28b36185e5fb65693ffcdce917ec8a747e19af6.scope - libcontainer container 0031e5299e7ff656c09d94d0a28b36185e5fb65693ffcdce917ec8a747e19af6.
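[editor's note] The lease controller's reported retry interval grows from 800ms to 1.6s across the two controller.go:145 entries, consistent with a doubling backoff while the API server stays unreachable. An illustrative sketch of that pattern — the starting value and cap are assumptions, not taken from the log:

```go
// Hedged sketch of a doubling retry loop; only the 800ms -> 1.6s progression
// is observable in the log, the rest is assumed for illustration.
package main

import (
	"errors"
	"fmt"
	"time"
)

func ensureLeaseWithBackoff(ensure func() error) {
	interval := 400 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed cap
	for {
		if err := ensure(); err == nil {
			return
		}
		if interval < maxInterval {
			interval *= 2 // 800ms, 1.6s, 3.2s, ...
		}
		fmt.Printf("Failed to ensure lease exists, will retry interval=%v\n", interval)
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	ensureLeaseWithBackoff(func() error {
		if attempts++; attempts < 4 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
}
```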
Jul 10 00:37:12.287530 systemd[1]: Started cri-containerd-32ed245ba7327319156a192964df5360b91dd97ff69bc014512bb4d8690cea9d.scope - libcontainer container 32ed245ba7327319156a192964df5360b91dd97ff69bc014512bb4d8690cea9d.
Jul 10 00:37:12.288770 systemd[1]: Started cri-containerd-9962af3afe0c45636a9d38b2d86beb0e8987d425c9ba7bb2eae05726aca2e8ae.scope - libcontainer container 9962af3afe0c45636a9d38b2d86beb0e8987d425c9ba7bb2eae05726aca2e8ae.
Jul 10 00:37:12.307905 kubelet[2092]: W0710 00:37:12.307838 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Jul 10 00:37:12.307905 kubelet[2092]: E0710 00:37:12.307906 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:37:12.324677 containerd[1431]: time="2025-07-10T00:37:12.323729293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"32ed245ba7327319156a192964df5360b91dd97ff69bc014512bb4d8690cea9d\""
Jul 10 00:37:12.325887 kubelet[2092]: E0710 00:37:12.325844 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.329199 containerd[1431]: time="2025-07-10T00:37:12.329158853Z" level=info msg="CreateContainer within sandbox \"32ed245ba7327319156a192964df5360b91dd97ff69bc014512bb4d8690cea9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 00:37:12.331598 containerd[1431]: time="2025-07-10T00:37:12.329175853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e609212e3a39990c3c2cd9d0028c0c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9962af3afe0c45636a9d38b2d86beb0e8987d425c9ba7bb2eae05726aca2e8ae\""
Jul 10 00:37:12.331659 kubelet[2092]: E0710 00:37:12.331546 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.333206 containerd[1431]: time="2025-07-10T00:37:12.333162933Z" level=info msg="CreateContainer within sandbox \"9962af3afe0c45636a9d38b2d86beb0e8987d425c9ba7bb2eae05726aca2e8ae\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 00:37:12.333422 containerd[1431]: time="2025-07-10T00:37:12.333355853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"0031e5299e7ff656c09d94d0a28b36185e5fb65693ffcdce917ec8a747e19af6\""
Jul 10 00:37:12.334282 kubelet[2092]: E0710 00:37:12.334209 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.336203 containerd[1431]: time="2025-07-10T00:37:12.336167693Z" level=info msg="CreateContainer within sandbox \"0031e5299e7ff656c09d94d0a28b36185e5fb65693ffcdce917ec8a747e19af6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 00:37:12.342941 kubelet[2092]: W0710 00:37:12.342879 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.112:6443: connect: connection refused
Jul 10 00:37:12.343156 kubelet[2092]: E0710 00:37:12.343127 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.112:6443: connect: connection refused" logger="UnhandledError"
Jul 10 00:37:12.350985 containerd[1431]: time="2025-07-10T00:37:12.350872613Z" level=info msg="CreateContainer within sandbox \"9962af3afe0c45636a9d38b2d86beb0e8987d425c9ba7bb2eae05726aca2e8ae\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ebde9d197acebcb8ea61e50e67e4727c4ef7cfd4a29d68e50a18128345a50ed0\""
Jul 10 00:37:12.351869 containerd[1431]: time="2025-07-10T00:37:12.351530893Z" level=info msg="StartContainer for \"ebde9d197acebcb8ea61e50e67e4727c4ef7cfd4a29d68e50a18128345a50ed0\""
Jul 10 00:37:12.353868 containerd[1431]: time="2025-07-10T00:37:12.353743653Z" level=info msg="CreateContainer within sandbox \"32ed245ba7327319156a192964df5360b91dd97ff69bc014512bb4d8690cea9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9239ad785d5458e06d5f0d5965c2888ad1b7b9b7ab60a5c931273a02fab920d4\""
Jul 10 00:37:12.354348 containerd[1431]: time="2025-07-10T00:37:12.354305653Z" level=info msg="StartContainer for \"9239ad785d5458e06d5f0d5965c2888ad1b7b9b7ab60a5c931273a02fab920d4\""
Jul 10 00:37:12.359843 containerd[1431]: time="2025-07-10T00:37:12.359738133Z" level=info msg="CreateContainer within sandbox \"0031e5299e7ff656c09d94d0a28b36185e5fb65693ffcdce917ec8a747e19af6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a60b6074d0f2f76b263289cfae24cfa64a1abb57399e30689e3239a2567aad88\""
Jul 10 00:37:12.360437 containerd[1431]: time="2025-07-10T00:37:12.360270893Z" level=info msg="StartContainer for \"a60b6074d0f2f76b263289cfae24cfa64a1abb57399e30689e3239a2567aad88\""
Jul 10 00:37:12.377578 systemd[1]: Started cri-containerd-ebde9d197acebcb8ea61e50e67e4727c4ef7cfd4a29d68e50a18128345a50ed0.scope - libcontainer container ebde9d197acebcb8ea61e50e67e4727c4ef7cfd4a29d68e50a18128345a50ed0.
Jul 10 00:37:12.381159 systemd[1]: Started cri-containerd-9239ad785d5458e06d5f0d5965c2888ad1b7b9b7ab60a5c931273a02fab920d4.scope - libcontainer container 9239ad785d5458e06d5f0d5965c2888ad1b7b9b7ab60a5c931273a02fab920d4.
Jul 10 00:37:12.384340 systemd[1]: Started cri-containerd-a60b6074d0f2f76b263289cfae24cfa64a1abb57399e30689e3239a2567aad88.scope - libcontainer container a60b6074d0f2f76b263289cfae24cfa64a1abb57399e30689e3239a2567aad88.
Jul 10 00:37:12.418536 containerd[1431]: time="2025-07-10T00:37:12.418478453Z" level=info msg="StartContainer for \"9239ad785d5458e06d5f0d5965c2888ad1b7b9b7ab60a5c931273a02fab920d4\" returns successfully"
Jul 10 00:37:12.434933 containerd[1431]: time="2025-07-10T00:37:12.434499613Z" level=info msg="StartContainer for \"ebde9d197acebcb8ea61e50e67e4727c4ef7cfd4a29d68e50a18128345a50ed0\" returns successfully"
Jul 10 00:37:12.434933 containerd[1431]: time="2025-07-10T00:37:12.434581733Z" level=info msg="StartContainer for \"a60b6074d0f2f76b263289cfae24cfa64a1abb57399e30689e3239a2567aad88\" returns successfully"
Jul 10 00:37:12.483712 kubelet[2092]: I0710 00:37:12.483679 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:37:12.484524 kubelet[2092]: E0710 00:37:12.484488 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.112:6443/api/v1/nodes\": dial tcp 10.0.0.112:6443: connect: connection refused" node="localhost"
Jul 10 00:37:12.871647 kubelet[2092]: E0710 00:37:12.871604 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.873310 kubelet[2092]: E0710 00:37:12.873284 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:12.875485 kubelet[2092]: E0710 00:37:12.875449 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:13.878244 kubelet[2092]: E0710 00:37:13.878213 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:13.878611 kubelet[2092]: E0710 00:37:13.878490 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:14.086119 kubelet[2092]: I0710 00:37:14.085922 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:37:14.122215 kubelet[2092]: E0710 00:37:14.122164 2092 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 10 00:37:14.203840 kubelet[2092]: I0710 00:37:14.203389 2092 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 10 00:37:14.203840 kubelet[2092]: E0710 00:37:14.203427 2092 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 10 00:37:14.214561 kubelet[2092]: E0710 00:37:14.214521 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.315044 kubelet[2092]: E0710 00:37:14.314968 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.415842 kubelet[2092]: E0710 00:37:14.415788 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.516027 kubelet[2092]: E0710 00:37:14.515958 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.616936 kubelet[2092]: E0710 00:37:14.616868 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.717440 kubelet[2092]: E0710 00:37:14.717378 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.818008 kubelet[2092]: E0710 00:37:14.817866 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:14.918235 kubelet[2092]: E0710 00:37:14.918173 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:15.018788 kubelet[2092]: E0710 00:37:15.018736 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:15.119744 kubelet[2092]: E0710 00:37:15.119493 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:15.220556 kubelet[2092]: E0710 00:37:15.220510 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:15.841980 kubelet[2092]: I0710 00:37:15.841953 2092 apiserver.go:52] "Watching apiserver"
Jul 10 00:37:15.850348 kubelet[2092]: I0710 00:37:15.850298 2092 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 10 00:37:16.376598 systemd[1]: Reloading requested from client PID 2372 ('systemctl') (unit session-7.scope)...
Jul 10 00:37:16.376612 systemd[1]: Reloading...
Jul 10 00:37:16.432466 kubelet[2092]: E0710 00:37:16.432217 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:16.438447 zram_generator::config[2414]: No configuration found.
Jul 10 00:37:16.584118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:37:16.652304 systemd[1]: Reloading finished in 275 ms.
Jul 10 00:37:16.684956 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:37:16.698514 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:37:16.698746 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:37:16.698801 systemd[1]: kubelet.service: Consumed 1.796s CPU time, 129.2M memory peak, 0B memory swap peak.
Jul 10 00:37:16.708691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:37:16.808097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:37:16.812034 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:37:16.854703 kubelet[2453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:37:16.854703 kubelet[2453]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:37:16.854703 kubelet[2453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:37:16.855089 kubelet[2453]: I0710 00:37:16.854748 2453 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:37:16.860778 kubelet[2453]: I0710 00:37:16.860751 2453 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 10 00:37:16.860778 kubelet[2453]: I0710 00:37:16.860776 2453 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:37:16.860999 kubelet[2453]: I0710 00:37:16.860987 2453 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 10 00:37:16.862292 kubelet[2453]: I0710 00:37:16.862266 2453 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 10 00:37:16.866759 kubelet[2453]: I0710 00:37:16.866733 2453 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:37:16.872558 kubelet[2453]: E0710 00:37:16.871956 2453 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:37:16.872558 kubelet[2453]: I0710 00:37:16.872553 2453 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:37:16.876525 kubelet[2453]: I0710 00:37:16.876489 2453 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:37:16.876626 kubelet[2453]: I0710 00:37:16.876607 2453 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 10 00:37:16.876741 kubelet[2453]: I0710 00:37:16.876713 2453 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:37:16.876913 kubelet[2453]: I0710 00:37:16.876739 2453 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:37:16.876913 kubelet[2453]: I0710 00:37:16.876909 2453 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:37:16.876913 kubelet[2453]: I0710 00:37:16.876918 2453 container_manager_linux.go:300] "Creating device plugin manager"
Jul 10 00:37:16.877053 kubelet[2453]: I0710 00:37:16.876949 2453 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:37:16.877053 kubelet[2453]: I0710 00:37:16.877038 2453 kubelet.go:408] "Attempting to sync node with API server"
Jul 10 00:37:16.877053 kubelet[2453]: I0710 00:37:16.877048 2453 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:37:16.877123 kubelet[2453]: I0710 00:37:16.877066 2453 kubelet.go:314] "Adding apiserver pod source"
Jul 10 00:37:16.877276 kubelet[2453]: I0710 00:37:16.877151 2453 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:37:16.878262 kubelet[2453]: I0710 00:37:16.878239 2453 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 10 00:37:16.878753 kubelet[2453]: I0710 00:37:16.878739 2453 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 10 00:37:16.887728 kubelet[2453]: I0710 00:37:16.885847 2453 server.go:1274] "Started kubelet"
Jul 10 00:37:16.887728 kubelet[2453]: I0710 00:37:16.886152 2453 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:37:16.887728 kubelet[2453]: I0710 00:37:16.886162 2453 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:37:16.887728 kubelet[2453]: I0710 00:37:16.886462 2453 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:37:16.887728 kubelet[2453]: I0710 00:37:16.887217 2453 server.go:449] "Adding debug handlers to kubelet server"
Jul 10 00:37:16.888533 kubelet[2453]: I0710 00:37:16.888497 2453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:37:16.888833 kubelet[2453]: I0710 00:37:16.888810 2453 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:37:16.891942 kubelet[2453]: I0710 00:37:16.891915 2453 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 10 00:37:16.892072 kubelet[2453]: E0710 00:37:16.892054 2453 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:37:16.892887 kubelet[2453]: I0710 00:37:16.892866 2453 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 10 00:37:16.893237 kubelet[2453]: I0710 00:37:16.893217 2453 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:37:16.899648 kubelet[2453]: I0710 00:37:16.899620 2453 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:37:16.901575 kubelet[2453]: I0710 00:37:16.901441 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:37:16.906667 kubelet[2453]: I0710 00:37:16.906574 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:37:16.906667 kubelet[2453]: I0710 00:37:16.906603 2453 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 10 00:37:16.906667 kubelet[2453]: I0710 00:37:16.906637 2453 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 10 00:37:16.906761 kubelet[2453]: E0710 00:37:16.906686 2453 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:37:16.913870 kubelet[2453]: I0710 00:37:16.912489 2453 factory.go:221] Registration of the containerd container factory successfully
Jul 10 00:37:16.913870 kubelet[2453]: I0710 00:37:16.912507 2453 factory.go:221] Registration of the systemd container factory successfully
Jul 10 00:37:16.954606 kubelet[2453]: I0710 00:37:16.954580 2453 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 10 00:37:16.954772 kubelet[2453]: I0710 00:37:16.954742 2453 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 10 00:37:16.954848 kubelet[2453]: I0710 00:37:16.954839 2453 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:37:16.955028 kubelet[2453]: I0710 00:37:16.955012 2453 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 00:37:16.955104 kubelet[2453]: I0710 00:37:16.955081 2453 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 00:37:16.955164 kubelet[2453]: I0710 00:37:16.955154 2453 policy_none.go:49] "None policy: Start"
Jul 10 00:37:16.955776 kubelet[2453]: I0710 00:37:16.955735 2453 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 10 00:37:16.955871 kubelet[2453]: I0710 00:37:16.955861 2453 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:37:16.956072 kubelet[2453]: I0710 00:37:16.956058 2453 state_mem.go:75] "Updated machine memory state"
Jul 10 00:37:16.960557 kubelet[2453]: I0710 00:37:16.960537 2453 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 10 00:37:16.960815 kubelet[2453]: I0710 00:37:16.960797 2453 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:37:16.960899 kubelet[2453]: I0710 00:37:16.960870 2453 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:37:16.961191 kubelet[2453]: I0710 00:37:16.961156 2453 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:37:17.012883 kubelet[2453]: E0710 00:37:17.012811 2453 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:37:17.066476 kubelet[2453]: I0710 00:37:17.066443 2453 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 10 00:37:17.072520 kubelet[2453]: I0710 00:37:17.072497 2453 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 10 00:37:17.072616 kubelet[2453]: I0710 00:37:17.072564 2453 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 10 00:37:17.095982 kubelet[2453]: I0710 00:37:17.094470 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:37:17.095982 kubelet[2453]: I0710 00:37:17.094530 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e609212e3a39990c3c2cd9d0028c0c9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e609212e3a39990c3c2cd9d0028c0c9b\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:37:17.095982 kubelet[2453]: I0710 00:37:17.094595 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:37:17.095982 kubelet[2453]: I0710 00:37:17.094625 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:37:17.095982 kubelet[2453]: I0710 00:37:17.094673 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:37:17.096195 kubelet[2453]: I0710 00:37:17.094707 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:37:17.096195 kubelet[2453]: I0710 00:37:17.094725 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e609212e3a39990c3c2cd9d0028c0c9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e609212e3a39990c3c2cd9d0028c0c9b\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:37:17.096195 kubelet[2453]: I0710 00:37:17.094766 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e609212e3a39990c3c2cd9d0028c0c9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e609212e3a39990c3c2cd9d0028c0c9b\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:37:17.096195 kubelet[2453]: I0710 00:37:17.095045 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:37:17.312529 kubelet[2453]: E0710 00:37:17.312490 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:17.313547 kubelet[2453]: E0710 00:37:17.313521 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:17.313607 kubelet[2453]: E0710 00:37:17.313577 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:17.878030 kubelet[2453]: I0710 00:37:17.877982 2453 apiserver.go:52] "Watching apiserver"
Jul 10 00:37:17.893840 kubelet[2453]: I0710 00:37:17.893781 2453 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 10 00:37:17.934432 kubelet[2453]: E0710 00:37:17.934259 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:17.934432 kubelet[2453]: E0710 00:37:17.934328 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:17.934432 kubelet[2453]: E0710 00:37:17.934409 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:17.964399 kubelet[2453]: I0710 00:37:17.964200 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.963750893 podStartE2EDuration="963.750893ms" podCreationTimestamp="2025-07-10 00:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:17.950699493 +0000 UTC m=+1.135803521" watchObservedRunningTime="2025-07-10 00:37:17.963750893 +0000 UTC m=+1.148854921"
Jul 10 00:37:17.964399 kubelet[2453]: I0710 00:37:17.964313 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.964307293 podStartE2EDuration="1.964307293s" podCreationTimestamp="2025-07-10 00:37:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:17.962526533 +0000 UTC m=+1.147630561" watchObservedRunningTime="2025-07-10 00:37:17.964307293 +0000 UTC m=+1.149411321"
Jul 10 00:37:17.980834 kubelet[2453]: I0710 00:37:17.980748 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.980728053 podStartE2EDuration="980.728053ms" podCreationTimestamp="2025-07-10 00:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:17.971612813 +0000 UTC m=+1.156716841" watchObservedRunningTime="2025-07-10 00:37:17.980728053 +0000 UTC m=+1.165832041"
Jul 10 00:37:18.935383 kubelet[2453]: E0710 00:37:18.935326 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:18.936200 kubelet[2453]: E0710 00:37:18.935886 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:37:21.476638 kubelet[2453]: I0710 00:37:21.476549 2453 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:37:21.476668 containerd[1431]: time="2025-07-10T00:37:21.476350383Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:37:22.480687 systemd[1]: Created slice kubepods-besteffort-pod2388e076_b740_4008_8a0f_c273f529dd1e.slice - libcontainer container kubepods-besteffort-pod2388e076_b740_4008_8a0f_c273f529dd1e.slice. Jul 10 00:37:22.530056 kubelet[2453]: I0710 00:37:22.530014 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2388e076-b740-4008-8a0f-c273f529dd1e-kube-proxy\") pod \"kube-proxy-2jkkg\" (UID: \"2388e076-b740-4008-8a0f-c273f529dd1e\") " pod="kube-system/kube-proxy-2jkkg" Jul 10 00:37:22.530056 kubelet[2453]: I0710 00:37:22.530056 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2388e076-b740-4008-8a0f-c273f529dd1e-xtables-lock\") pod \"kube-proxy-2jkkg\" (UID: \"2388e076-b740-4008-8a0f-c273f529dd1e\") " pod="kube-system/kube-proxy-2jkkg" Jul 10 00:37:22.530473 kubelet[2453]: I0710 00:37:22.530075 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2388e076-b740-4008-8a0f-c273f529dd1e-lib-modules\") pod \"kube-proxy-2jkkg\" (UID: \"2388e076-b740-4008-8a0f-c273f529dd1e\") " pod="kube-system/kube-proxy-2jkkg" Jul 10 00:37:22.530473 kubelet[2453]: I0710 00:37:22.530094 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5b9c\" (UniqueName: \"kubernetes.io/projected/2388e076-b740-4008-8a0f-c273f529dd1e-kube-api-access-f5b9c\") pod \"kube-proxy-2jkkg\" (UID: \"2388e076-b740-4008-8a0f-c273f529dd1e\") " pod="kube-system/kube-proxy-2jkkg" Jul 10 00:37:22.594403 systemd[1]: Created slice kubepods-besteffort-pod1d68fb22_e940_4b84_84bb_8f1ac5093bba.slice - libcontainer container kubepods-besteffort-pod1d68fb22_e940_4b84_84bb_8f1ac5093bba.slice. 
Jul 10 00:37:22.630675 kubelet[2453]: I0710 00:37:22.630477 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1d68fb22-e940-4b84-84bb-8f1ac5093bba-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-fh2tw\" (UID: \"1d68fb22-e940-4b84-84bb-8f1ac5093bba\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-fh2tw"
Jul 10 00:37:22.630675 kubelet[2453]: I0710 00:37:22.630524 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncp5t\" (UniqueName: \"kubernetes.io/projected/1d68fb22-e940-4b84-84bb-8f1ac5093bba-kube-api-access-ncp5t\") pod \"tigera-operator-5bf8dfcb4-fh2tw\" (UID: \"1d68fb22-e940-4b84-84bb-8f1ac5093bba\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-fh2tw"
Jul 10 00:37:22.789326 kubelet[2453]: E0710 00:37:22.789280 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:22.790431 containerd[1431]: time="2025-07-10T00:37:22.789917120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jkkg,Uid:2388e076-b740-4008-8a0f-c273f529dd1e,Namespace:kube-system,Attempt:0,}"
Jul 10 00:37:22.809314 containerd[1431]: time="2025-07-10T00:37:22.809189408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:22.809314 containerd[1431]: time="2025-07-10T00:37:22.809249809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:22.809314 containerd[1431]: time="2025-07-10T00:37:22.809268049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:22.809568 containerd[1431]: time="2025-07-10T00:37:22.809414970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:22.829553 systemd[1]: Started cri-containerd-51f1b69b31211dea2cabf22d2bceffa6fa47b9c1b1e821823ccea26e1a9d0b5a.scope - libcontainer container 51f1b69b31211dea2cabf22d2bceffa6fa47b9c1b1e821823ccea26e1a9d0b5a.
Jul 10 00:37:22.849264 containerd[1431]: time="2025-07-10T00:37:22.849211639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2jkkg,Uid:2388e076-b740-4008-8a0f-c273f529dd1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"51f1b69b31211dea2cabf22d2bceffa6fa47b9c1b1e821823ccea26e1a9d0b5a\""
Jul 10 00:37:22.850170 kubelet[2453]: E0710 00:37:22.849957 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:22.852059 containerd[1431]: time="2025-07-10T00:37:22.852027350Z" level=info msg="CreateContainer within sandbox \"51f1b69b31211dea2cabf22d2bceffa6fa47b9c1b1e821823ccea26e1a9d0b5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 00:37:22.865645 containerd[1431]: time="2025-07-10T00:37:22.865600656Z" level=info msg="CreateContainer within sandbox \"51f1b69b31211dea2cabf22d2bceffa6fa47b9c1b1e821823ccea26e1a9d0b5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"de90f63eb25499d389adbd5a1a31f7d57d8be51e79adba334882aede7fa67249\""
Jul 10 00:37:22.866483 containerd[1431]: time="2025-07-10T00:37:22.866445225Z" level=info msg="StartContainer for \"de90f63eb25499d389adbd5a1a31f7d57d8be51e79adba334882aede7fa67249\""
Jul 10 00:37:22.890510 systemd[1]: Started cri-containerd-de90f63eb25499d389adbd5a1a31f7d57d8be51e79adba334882aede7fa67249.scope - libcontainer container de90f63eb25499d389adbd5a1a31f7d57d8be51e79adba334882aede7fa67249.
Jul 10 00:37:22.899843 containerd[1431]: time="2025-07-10T00:37:22.899772105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-fh2tw,Uid:1d68fb22-e940-4b84-84bb-8f1ac5093bba,Namespace:tigera-operator,Attempt:0,}"
Jul 10 00:37:22.920502 containerd[1431]: time="2025-07-10T00:37:22.920450727Z" level=info msg="StartContainer for \"de90f63eb25499d389adbd5a1a31f7d57d8be51e79adba334882aede7fa67249\" returns successfully"
Jul 10 00:37:22.923375 containerd[1431]: time="2025-07-10T00:37:22.922879074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:22.923375 containerd[1431]: time="2025-07-10T00:37:22.922929834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:22.923375 containerd[1431]: time="2025-07-10T00:37:22.922945954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:22.923375 containerd[1431]: time="2025-07-10T00:37:22.923015555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:22.942528 kubelet[2453]: E0710 00:37:22.942496 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:22.945731 systemd[1]: Started cri-containerd-cf22f7b7e07d73f00e87f64c4109f118a231544963b814bab1df95f10f0367c8.scope - libcontainer container cf22f7b7e07d73f00e87f64c4109f118a231544963b814bab1df95f10f0367c8.
Jul 10 00:37:22.982187 containerd[1431]: time="2025-07-10T00:37:22.982150833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-fh2tw,Uid:1d68fb22-e940-4b84-84bb-8f1ac5093bba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cf22f7b7e07d73f00e87f64c4109f118a231544963b814bab1df95f10f0367c8\""
Jul 10 00:37:22.984169 containerd[1431]: time="2025-07-10T00:37:22.984050253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 10 00:37:24.166887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846875561.mount: Deactivated successfully.
Jul 10 00:37:24.449132 containerd[1431]: time="2025-07-10T00:37:24.448889786Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:37:24.449691 containerd[1431]: time="2025-07-10T00:37:24.449583593Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 10 00:37:24.450395 containerd[1431]: time="2025-07-10T00:37:24.450322120Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:37:24.452481 containerd[1431]: time="2025-07-10T00:37:24.452449940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:37:24.453580 containerd[1431]: time="2025-07-10T00:37:24.453537630Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.469427496s"
Jul 10 00:37:24.453632 containerd[1431]: time="2025-07-10T00:37:24.453578111Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 10 00:37:24.455580 containerd[1431]: time="2025-07-10T00:37:24.455542209Z" level=info msg="CreateContainer within sandbox \"cf22f7b7e07d73f00e87f64c4109f118a231544963b814bab1df95f10f0367c8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 10 00:37:24.466546 containerd[1431]: time="2025-07-10T00:37:24.466505393Z" level=info msg="CreateContainer within sandbox \"cf22f7b7e07d73f00e87f64c4109f118a231544963b814bab1df95f10f0367c8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8904c5c4286d76d0513c6932c54dea479101607c1db42be94385dcf983b662b9\""
Jul 10 00:37:24.466952 containerd[1431]: time="2025-07-10T00:37:24.466895757Z" level=info msg="StartContainer for \"8904c5c4286d76d0513c6932c54dea479101607c1db42be94385dcf983b662b9\""
Jul 10 00:37:24.495513 systemd[1]: Started cri-containerd-8904c5c4286d76d0513c6932c54dea479101607c1db42be94385dcf983b662b9.scope - libcontainer container 8904c5c4286d76d0513c6932c54dea479101607c1db42be94385dcf983b662b9.
Jul 10 00:37:24.524035 containerd[1431]: time="2025-07-10T00:37:24.523922337Z" level=info msg="StartContainer for \"8904c5c4286d76d0513c6932c54dea479101607c1db42be94385dcf983b662b9\" returns successfully"
Jul 10 00:37:24.957238 kubelet[2453]: I0710 00:37:24.957192 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2jkkg" podStartSLOduration=2.9571768819999997 podStartE2EDuration="2.957176882s" podCreationTimestamp="2025-07-10 00:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:22.952602994 +0000 UTC m=+6.137706982" watchObservedRunningTime="2025-07-10 00:37:24.957176882 +0000 UTC m=+8.142280910"
Jul 10 00:37:26.885081 kubelet[2453]: E0710 00:37:26.884903 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:26.904409 kubelet[2453]: I0710 00:37:26.902893 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-fh2tw" podStartSLOduration=3.431795458 podStartE2EDuration="4.902873851s" podCreationTimestamp="2025-07-10 00:37:22 +0000 UTC" firstStartedPulling="2025-07-10 00:37:22.983266445 +0000 UTC m=+6.168370433" lastFinishedPulling="2025-07-10 00:37:24.454344798 +0000 UTC m=+7.639448826" observedRunningTime="2025-07-10 00:37:24.958141211 +0000 UTC m=+8.143245239" watchObservedRunningTime="2025-07-10 00:37:26.902873851 +0000 UTC m=+10.087977879"
Jul 10 00:37:26.952246 kubelet[2453]: E0710 00:37:26.952184 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:27.284968 kubelet[2453]: E0710 00:37:27.284869 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:27.956785 kubelet[2453]: E0710 00:37:27.956750 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:27.957153 kubelet[2453]: E0710 00:37:27.956759 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:28.127025 kubelet[2453]: E0710 00:37:28.126980 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:29.977564 sudo[1614]: pam_unix(sudo:session): session closed for user root
Jul 10 00:37:29.983795 sshd[1611]: pam_unix(sshd:session): session closed for user core
Jul 10 00:37:29.990353 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:34520.service: Deactivated successfully.
Jul 10 00:37:29.997048 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:37:29.997235 systemd[1]: session-7.scope: Consumed 6.666s CPU time, 152.0M memory peak, 0B memory swap peak.
Jul 10 00:37:30.000838 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:37:30.004173 systemd-logind[1418]: Removed session 7.
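[editor's note] The pod_startup_latency_tracker figures above are internally consistent: for tigera-operator, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (4.902873851s), and podStartSLOduration is that figure minus the image pull window, which lands within rounding distance of the logged 3.431795458s. A quick check recomputing both from the logged timestamps:

```go
// Recomputing the tigera-operator startup figures from the timestamps above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-10 00:37:22 +0000 UTC")
	running := mustParse("2025-07-10 00:37:26.902873851 +0000 UTC") // watchObservedRunningTime
	pullStart := mustParse("2025-07-10 00:37:22.983266445 +0000 UTC")
	pullEnd := mustParse("2025-07-10 00:37:24.454344798 +0000 UTC")

	e2e := running.Sub(created)         // 4.902873851s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~3.4317955s, matching podStartSLOduration to within tens of ns
	fmt.Println(e2e, slo)
}
```

The kube-proxy entry has no pull window (its firstStartedPulling/lastFinishedPulling are zero values), which is why its SLO and E2E durations are equal.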
Jul 10 00:37:31.671476 update_engine[1420]: I20250710 00:37:31.671391 1420 update_attempter.cc:509] Updating boot flags...
Jul 10 00:37:31.765613 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2864)
Jul 10 00:37:31.854400 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2862)
Jul 10 00:37:36.520148 systemd[1]: Created slice kubepods-besteffort-podb89d2de8_e60b_47ab_a792_70fb5ad54278.slice - libcontainer container kubepods-besteffort-podb89d2de8_e60b_47ab_a792_70fb5ad54278.slice.
Jul 10 00:37:36.538575 kubelet[2453]: I0710 00:37:36.538495 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b89d2de8-e60b-47ab-a792-70fb5ad54278-tigera-ca-bundle\") pod \"calico-typha-777fcf6c95-mfdwh\" (UID: \"b89d2de8-e60b-47ab-a792-70fb5ad54278\") " pod="calico-system/calico-typha-777fcf6c95-mfdwh"
Jul 10 00:37:36.538575 kubelet[2453]: I0710 00:37:36.538533 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b89d2de8-e60b-47ab-a792-70fb5ad54278-typha-certs\") pod \"calico-typha-777fcf6c95-mfdwh\" (UID: \"b89d2de8-e60b-47ab-a792-70fb5ad54278\") " pod="calico-system/calico-typha-777fcf6c95-mfdwh"
Jul 10 00:37:36.538575 kubelet[2453]: I0710 00:37:36.538552 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npss2\" (UniqueName: \"kubernetes.io/projected/b89d2de8-e60b-47ab-a792-70fb5ad54278-kube-api-access-npss2\") pod \"calico-typha-777fcf6c95-mfdwh\" (UID: \"b89d2de8-e60b-47ab-a792-70fb5ad54278\") " pod="calico-system/calico-typha-777fcf6c95-mfdwh"
Jul 10 00:37:36.824630 kubelet[2453]: E0710 00:37:36.824043 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:36.825192 containerd[1431]: time="2025-07-10T00:37:36.825111062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777fcf6c95-mfdwh,Uid:b89d2de8-e60b-47ab-a792-70fb5ad54278,Namespace:calico-system,Attempt:0,}"
Jul 10 00:37:36.852089 containerd[1431]: time="2025-07-10T00:37:36.851965939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:36.852089 containerd[1431]: time="2025-07-10T00:37:36.852018299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:36.852089 containerd[1431]: time="2025-07-10T00:37:36.852041939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:36.852409 containerd[1431]: time="2025-07-10T00:37:36.852130580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:36.871651 systemd[1]: Created slice kubepods-besteffort-pod257c723b_2f49_4823_a7e3_00017c25a1e1.slice - libcontainer container kubepods-besteffort-pod257c723b_2f49_4823_a7e3_00017c25a1e1.slice.
Jul 10 00:37:36.897549 systemd[1]: Started cri-containerd-7df8331a35e2f244f54cf97a4e519666260e513da03541c8e855d5c5b991d295.scope - libcontainer container 7df8331a35e2f244f54cf97a4e519666260e513da03541c8e855d5c5b991d295.
Jul 10 00:37:36.928571 containerd[1431]: time="2025-07-10T00:37:36.928523833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-777fcf6c95-mfdwh,Uid:b89d2de8-e60b-47ab-a792-70fb5ad54278,Namespace:calico-system,Attempt:0,} returns sandbox id \"7df8331a35e2f244f54cf97a4e519666260e513da03541c8e855d5c5b991d295\""
Jul 10 00:37:36.929338 kubelet[2453]: E0710 00:37:36.929306 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:36.931776 containerd[1431]: time="2025-07-10T00:37:36.931543206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 10 00:37:36.942274 kubelet[2453]: I0710 00:37:36.942233 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-cni-bin-dir\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942274 kubelet[2453]: I0710 00:37:36.942276 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-lib-modules\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942426 kubelet[2453]: I0710 00:37:36.942301 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-var-run-calico\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942426 kubelet[2453]: I0710 00:37:36.942321 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/257c723b-2f49-4823-a7e3-00017c25a1e1-tigera-ca-bundle\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942426 kubelet[2453]: I0710 00:37:36.942345 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-xtables-lock\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942426 kubelet[2453]: I0710 00:37:36.942371 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-policysync\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942518 kubelet[2453]: I0710 00:37:36.942422 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-flexvol-driver-host\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942518 kubelet[2453]: I0710 00:37:36.942465 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/257c723b-2f49-4823-a7e3-00017c25a1e1-node-certs\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942518 kubelet[2453]: I0710 00:37:36.942486 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-cni-net-dir\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942518 kubelet[2453]: I0710 00:37:36.942501 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-var-lib-calico\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942518 kubelet[2453]: I0710 00:37:36.942518 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/257c723b-2f49-4823-a7e3-00017c25a1e1-cni-log-dir\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:36.942665 kubelet[2453]: I0710 00:37:36.942535 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrzcg\" (UniqueName: \"kubernetes.io/projected/257c723b-2f49-4823-a7e3-00017c25a1e1-kube-api-access-zrzcg\") pod \"calico-node-xjwnj\" (UID: \"257c723b-2f49-4823-a7e3-00017c25a1e1\") " pod="calico-system/calico-node-xjwnj"
Jul 10 00:37:37.044066 kubelet[2453]: E0710 00:37:37.044027 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:37:37.044066 kubelet[2453]: W0710 00:37:37.044054 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:37:37.044066 kubelet[2453]: E0710 00:37:37.044084 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:37:37.104228 kubelet[2453]: E0710 00:37:37.104102 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qd5xx" podUID="3b36436a-7d97-4120-92b3-49bbe1e5480c"
Jul 10 00:37:37.130735 kubelet[2453]: E0710 00:37:37.130618 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:37:37.130735 kubelet[2453]: W0710 00:37:37.130639 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:37:37.130735 kubelet[2453]: E0710 00:37:37.130657 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
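The pod_workers.go record shows why csi-node-driver-qd5xx cannot be synced yet: the runtime's NetworkReady condition stays false until Calico writes its CNI config. A condition like this can be read back over the CRI API; below is a rough sketch against a stock containerd install, where the socket path and the five-second timeout are assumptions for this node, not values taken from the log.

package main

import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Assumed socket path for a default containerd deployment.
        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
                grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
                log.Fatal(err)
        }
        defer conn.Close()

        // Status returns runtime conditions, including NetworkReady, the
        // condition the kubelet is complaining about above.
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).Status(ctx, &runtimeapi.StatusRequest{})
        if err != nil {
                log.Fatal(err)
        }
        for _, c := range resp.Status.Conditions {
                fmt.Printf("%s=%v reason=%q message=%q\n", c.Type, c.Status, c.Reason, c.Message)
        }
}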
Jul 10 00:37:37.143856 kubelet[2453]: I0710 00:37:37.143788 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3b36436a-7d97-4120-92b3-49bbe1e5480c-kubelet-dir\") pod \"csi-node-driver-qd5xx\" (UID: \"3b36436a-7d97-4120-92b3-49bbe1e5480c\") " pod="calico-system/csi-node-driver-qd5xx"
Jul 10 00:37:37.144070 kubelet[2453]: I0710 00:37:37.144014 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3b36436a-7d97-4120-92b3-49bbe1e5480c-varrun\") pod \"csi-node-driver-qd5xx\" (UID: \"3b36436a-7d97-4120-92b3-49bbe1e5480c\") " pod="calico-system/csi-node-driver-qd5xx"
Jul 10 00:37:37.144257 kubelet[2453]: I0710 00:37:37.144235 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnf8\" (UniqueName: \"kubernetes.io/projected/3b36436a-7d97-4120-92b3-49bbe1e5480c-kube-api-access-cjnf8\") pod \"csi-node-driver-qd5xx\" (UID: \"3b36436a-7d97-4120-92b3-49bbe1e5480c\") " pod="calico-system/csi-node-driver-qd5xx"
Jul 10 00:37:37.144881 kubelet[2453]: I0710 00:37:37.144863 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3b36436a-7d97-4120-92b3-49bbe1e5480c-registration-dir\") pod \"csi-node-driver-qd5xx\" (UID: \"3b36436a-7d97-4120-92b3-49bbe1e5480c\") " pod="calico-system/csi-node-driver-qd5xx"
Jul 10 00:37:37.146082 kubelet[2453]: I0710 00:37:37.146059 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3b36436a-7d97-4120-92b3-49bbe1e5480c-socket-dir\") pod \"csi-node-driver-qd5xx\" (UID: \"3b36436a-7d97-4120-92b3-49bbe1e5480c\") " pod="calico-system/csi-node-driver-qd5xx"
Jul 10 00:37:37.177588 containerd[1431]: time="2025-07-10T00:37:37.177547033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xjwnj,Uid:257c723b-2f49-4823-a7e3-00017c25a1e1,Namespace:calico-system,Attempt:0,}"
Jul 10 00:37:37.207771 containerd[1431]: time="2025-07-10T00:37:37.207336915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:37:37.207771 containerd[1431]: time="2025-07-10T00:37:37.207744277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:37:37.207771 containerd[1431]: time="2025-07-10T00:37:37.207756477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:37.208034 containerd[1431]: time="2025-07-10T00:37:37.207835717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:37:37.226584 systemd[1]: Started cri-containerd-3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36.scope - libcontainer container 3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36.
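The reconciler_common.go records above attach the csi-node-driver's volumes, all hostPath except one projected service-account token. For reference, the hostPath shape being verified looks roughly like this in k8s.io/api terms; the host paths are assumptions drawn from typical Calico CSI manifests, since the log records only the volume names and UIDs.

package main

import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
)

// hostPath builds the hostPath volume shape the reconciler is
// verifying; paths below are illustrative, not read from this log.
func hostPath(name, path string) corev1.Volume {
        return corev1.Volume{
                Name: name,
                VolumeSource: corev1.VolumeSource{
                        HostPath: &corev1.HostPathVolumeSource{Path: path},
                },
        }
}

func main() {
        vols := []corev1.Volume{
                hostPath("kubelet-dir", "/var/lib/kubelet"),
                hostPath("varrun", "/var/run"),
                hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"),
                hostPath("socket-dir", "/var/run/nodeagent"),
        }
        for _, v := range vols {
                fmt.Printf("%s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
        }
}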
Jul 10 00:37:37.247796 kubelet[2453]: E0710 00:37:37.247639 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:37:37.247796 kubelet[2453]: W0710 00:37:37.247663 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:37:37.247796 kubelet[2453]: E0710 00:37:37.247681 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:37:37.247966 containerd[1431]: time="2025-07-10T00:37:37.247737880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xjwnj,Uid:257c723b-2f49-4823-a7e3-00017c25a1e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\""
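The two RunPodSandbox records bracket the CRI call for calico-node-xjwnj: containerd receives the pod metadata and answers with the 64-hex sandbox id that later lines reference. As a sketch, the request the kubelet sends has roughly this shape (construction only; actually issuing it needs a RuntimeServiceClient as in the earlier sketch).

package main

import (
        "fmt"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
        // The metadata fields mirror the &PodSandboxMetadata{...} literal
        // containerd echoes in the RunPodSandbox log line above.
        req := &runtimeapi.RunPodSandboxRequest{
                Config: &runtimeapi.PodSandboxConfig{
                        Metadata: &runtimeapi.PodSandboxMetadata{
                                Name:      "calico-node-xjwnj",
                                Uid:       "257c723b-2f49-4823-a7e3-00017c25a1e1",
                                Namespace: "calico-system",
                                Attempt:   0,
                        },
                },
        }
        // RunPodSandbox would return a RunPodSandboxResponse whose
        // PodSandboxId is the id echoed in the "returns sandbox id" line.
        fmt.Println(req.Config.Metadata.Namespace + "/" + req.Config.Metadata.Name)
}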
Jul 10 00:37:37.857698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3190707189.mount: Deactivated successfully.
Jul 10 00:37:38.759841 containerd[1431]: time="2025-07-10T00:37:38.759801837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:37:38.760899 containerd[1431]: time="2025-07-10T00:37:38.760291079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 10 00:37:38.761444 containerd[1431]: time="2025-07-10T00:37:38.761323123Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:37:38.763452 containerd[1431]: time="2025-07-10T00:37:38.763392691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:37:38.764244 containerd[1431]: time="2025-07-10T00:37:38.764119014Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.832539407s"
Jul 10 00:37:38.764244 containerd[1431]: time="2025-07-10T00:37:38.764151774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 10 00:37:38.765347 containerd[1431]: time="2025-07-10T00:37:38.765018057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 10 00:37:38.779201 containerd[1431]: time="2025-07-10T00:37:38.779161312Z" level=info msg="CreateContainer within sandbox \"7df8331a35e2f244f54cf97a4e519666260e513da03541c8e855d5c5b991d295\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 10 00:37:38.792710 containerd[1431]: time="2025-07-10T00:37:38.792660043Z" level=info msg="CreateContainer within sandbox \"7df8331a35e2f244f54cf97a4e519666260e513da03541c8e855d5c5b991d295\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3705fd7a72f59bdefc8db98917f37694e46e89cd2b34d8a26521d528a4a409ae\""
Jul 10 00:37:38.793268 containerd[1431]: time="2025-07-10T00:37:38.793150725Z" level=info msg="StartContainer for \"3705fd7a72f59bdefc8db98917f37694e46e89cd2b34d8a26521d528a4a409ae\""
Jul 10 00:37:38.823565 systemd[1]: Started cri-containerd-3705fd7a72f59bdefc8db98917f37694e46e89cd2b34d8a26521d528a4a409ae.scope - libcontainer container 3705fd7a72f59bdefc8db98917f37694e46e89cd2b34d8a26521d528a4a409ae.
Jul 10 00:37:38.878352 containerd[1431]: time="2025-07-10T00:37:38.878249532Z" level=info msg="StartContainer for \"3705fd7a72f59bdefc8db98917f37694e46e89cd2b34d8a26521d528a4a409ae\" returns successfully"
Jul 10 00:37:38.908454 kubelet[2453]: E0710 00:37:38.908052 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qd5xx" podUID="3b36436a-7d97-4120-92b3-49bbe1e5480c"
Jul 10 00:37:38.995313 kubelet[2453]: E0710 00:37:38.995274 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:37:39.008802 kubelet[2453]: I0710 00:37:39.008736 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-777fcf6c95-mfdwh" podStartSLOduration=1.174479177 podStartE2EDuration="3.008720551s" podCreationTimestamp="2025-07-10 00:37:36 +0000 UTC" firstStartedPulling="2025-07-10 00:37:36.930681523 +0000 UTC m=+20.115785551" lastFinishedPulling="2025-07-10 00:37:38.764922897 +0000 UTC m=+21.950026925" observedRunningTime="2025-07-10 00:37:39.00855039 +0000 UTC m=+22.193654418" watchObservedRunningTime="2025-07-10 00:37:39.008720551 +0000 UTC m=+22.193824579"
Jul 10 00:37:39.049649 kubelet[2453]: E0710 00:37:39.049069 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:37:39.049649 kubelet[2453]: W0710 00:37:39.049571 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:37:39.049649 kubelet[2453]: E0710 00:37:39.049597 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" [identical three-message FlexVolume probe failures for nodeagent~uds repeat from Jul 10 00:37:39.049829 through Jul 10 00:37:39.073710; omitted] Jul 10 00:37:39.734180 containerd[1431]: time="2025-07-10T00:37:39.734133202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:39.735222 containerd[1431]: time="2025-07-10T00:37:39.735188685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 10 00:37:39.736097 containerd[1431]: time="2025-07-10T00:37:39.736046288Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:39.737928 containerd[1431]: time="2025-07-10T00:37:39.737894895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:39.738883 containerd[1431]: time="2025-07-10T00:37:39.738849339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 973.801041ms" Jul 10 00:37:39.738939 containerd[1431]: time="2025-07-10T00:37:39.738888899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 00:37:39.741170 containerd[1431]: time="2025-07-10T00:37:39.741140667Z" level=info msg="CreateContainer within sandbox \"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:37:39.762070 containerd[1431]: time="2025-07-10T00:37:39.762021422Z" level=info msg="CreateContainer within sandbox 
\"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061\"" Jul 10 00:37:39.762575 containerd[1431]: time="2025-07-10T00:37:39.762531384Z" level=info msg="StartContainer for \"9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061\"" Jul 10 00:37:39.792518 systemd[1]: Started cri-containerd-9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061.scope - libcontainer container 9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061. Jul 10 00:37:39.818535 containerd[1431]: time="2025-07-10T00:37:39.817492822Z" level=info msg="StartContainer for \"9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061\" returns successfully" Jul 10 00:37:39.852593 systemd[1]: cri-containerd-9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061.scope: Deactivated successfully. Jul 10 00:37:39.888626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061-rootfs.mount: Deactivated successfully. Jul 10 00:37:39.910600 containerd[1431]: time="2025-07-10T00:37:39.908268988Z" level=info msg="shim disconnected" id=9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061 namespace=k8s.io Jul 10 00:37:39.910600 containerd[1431]: time="2025-07-10T00:37:39.910594237Z" level=warning msg="cleaning up after shim disconnected" id=9f2603533a8db9f67344f33ab7044fb84cbbf36bbf2356339567d5146cdd9061 namespace=k8s.io Jul 10 00:37:39.910600 containerd[1431]: time="2025-07-10T00:37:39.910607797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:37:39.998154 kubelet[2453]: I0710 00:37:39.997820 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:37:39.999607 kubelet[2453]: E0710 00:37:39.998174 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:40.000089 containerd[1431]: time="2025-07-10T00:37:39.999892198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 00:37:40.908904 kubelet[2453]: E0710 00:37:40.908069 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qd5xx" podUID="3b36436a-7d97-4120-92b3-49bbe1e5480c" Jul 10 00:37:42.001251 containerd[1431]: time="2025-07-10T00:37:42.000406577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:42.001251 containerd[1431]: time="2025-07-10T00:37:42.001073819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 10 00:37:42.002007 containerd[1431]: time="2025-07-10T00:37:42.001960422Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:42.003991 containerd[1431]: time="2025-07-10T00:37:42.003954987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 10 00:37:42.004910 containerd[1431]: time="2025-07-10T00:37:42.004807590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.004876712s" Jul 10 00:37:42.004910 containerd[1431]: time="2025-07-10T00:37:42.004837150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 00:37:42.007293 containerd[1431]: time="2025-07-10T00:37:42.007131877Z" level=info msg="CreateContainer within sandbox \"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:37:42.019546 containerd[1431]: time="2025-07-10T00:37:42.019494474Z" level=info msg="CreateContainer within sandbox \"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da\"" Jul 10 00:37:42.020407 containerd[1431]: time="2025-07-10T00:37:42.020247516Z" level=info msg="StartContainer for \"8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da\"" Jul 10 00:37:42.039900 systemd[1]: run-containerd-runc-k8s.io-8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da-runc.f0iHoW.mount: Deactivated successfully. Jul 10 00:37:42.058584 systemd[1]: Started cri-containerd-8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da.scope - libcontainer container 8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da. Jul 10 00:37:42.084888 containerd[1431]: time="2025-07-10T00:37:42.084838347Z" level=info msg="StartContainer for \"8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da\" returns successfully" Jul 10 00:37:42.665051 systemd[1]: cri-containerd-8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da.scope: Deactivated successfully. Jul 10 00:37:42.683391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da-rootfs.mount: Deactivated successfully. Jul 10 00:37:42.743938 containerd[1431]: time="2025-07-10T00:37:42.743864421Z" level=info msg="shim disconnected" id=8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da namespace=k8s.io Jul 10 00:37:42.743938 containerd[1431]: time="2025-07-10T00:37:42.743920582Z" level=warning msg="cleaning up after shim disconnected" id=8ba33a3265ef5362de2e138296c36e30b80534f32309759f4e556cfc3f2be8da namespace=k8s.io Jul 10 00:37:42.743938 containerd[1431]: time="2025-07-10T00:37:42.743929982Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:37:42.760930 kubelet[2453]: I0710 00:37:42.760885 2453 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:37:42.802808 systemd[1]: Created slice kubepods-besteffort-pod50dbd3c7_db9e_475c_b96d_679203b54cc6.slice - libcontainer container kubepods-besteffort-pod50dbd3c7_db9e_475c_b96d_679203b54cc6.slice. 
Jul 10 00:37:42.807925 kubelet[2453]: W0710 00:37:42.806753 2453 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jul 10 00:37:42.815233 kubelet[2453]: E0710 00:37:42.815158 2453 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 10 00:37:42.816700 systemd[1]: Created slice kubepods-besteffort-pode7ca86f0_bcb9_4f79_9b5e_0dc27b14ae85.slice - libcontainer container kubepods-besteffort-pode7ca86f0_bcb9_4f79_9b5e_0dc27b14ae85.slice. Jul 10 00:37:42.824294 systemd[1]: Created slice kubepods-burstable-pod2d71dbf9_7620_445d_8d35_2cc9ef195ea7.slice - libcontainer container kubepods-burstable-pod2d71dbf9_7620_445d_8d35_2cc9ef195ea7.slice. Jul 10 00:37:42.832745 systemd[1]: Created slice kubepods-besteffort-pod1f658df4_5506_4d46_bb23_7f9741b9a122.slice - libcontainer container kubepods-besteffort-pod1f658df4_5506_4d46_bb23_7f9741b9a122.slice. Jul 10 00:37:42.838544 systemd[1]: Created slice kubepods-besteffort-podb35917b6_dff4_49b2_b380_4a0514f6d1e8.slice - libcontainer container kubepods-besteffort-podb35917b6_dff4_49b2_b380_4a0514f6d1e8.slice. Jul 10 00:37:42.843248 systemd[1]: Created slice kubepods-besteffort-pod6da9c462_d90c_4dbd_bc12_1f4bc6292e19.slice - libcontainer container kubepods-besteffort-pod6da9c462_d90c_4dbd_bc12_1f4bc6292e19.slice. Jul 10 00:37:42.848705 systemd[1]: Created slice kubepods-burstable-pod7c3c9fa5_4474_4544_97e7_30e66ba1f67c.slice - libcontainer container kubepods-burstable-pod7c3c9fa5_4474_4544_97e7_30e66ba1f67c.slice. 
Jul 10 00:37:42.894114 kubelet[2453]: I0710 00:37:42.894075 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c3c9fa5-4474-4544-97e7-30e66ba1f67c-config-volume\") pod \"coredns-7c65d6cfc9-rsqvw\" (UID: \"7c3c9fa5-4474-4544-97e7-30e66ba1f67c\") " pod="kube-system/coredns-7c65d6cfc9-rsqvw" Jul 10 00:37:42.894114 kubelet[2453]: I0710 00:37:42.894117 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-backend-key-pair\") pod \"whisker-5b6dd46dd7-xjrpt\" (UID: \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\") " pod="calico-system/whisker-5b6dd46dd7-xjrpt" Jul 10 00:37:42.894293 kubelet[2453]: I0710 00:37:42.894137 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85-calico-apiserver-certs\") pod \"calico-apiserver-9b5f696fd-6xg9b\" (UID: \"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85\") " pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" Jul 10 00:37:42.894293 kubelet[2453]: I0710 00:37:42.894159 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b35917b6-dff4-49b2-b380-4a0514f6d1e8-config\") pod \"goldmane-58fd7646b9-sqpvz\" (UID: \"b35917b6-dff4-49b2-b380-4a0514f6d1e8\") " pod="calico-system/goldmane-58fd7646b9-sqpvz" Jul 10 00:37:42.894293 kubelet[2453]: I0710 00:37:42.894176 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b35917b6-dff4-49b2-b380-4a0514f6d1e8-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-sqpvz\" (UID: \"b35917b6-dff4-49b2-b380-4a0514f6d1e8\") " pod="calico-system/goldmane-58fd7646b9-sqpvz" Jul 10 00:37:42.894293 kubelet[2453]: I0710 00:37:42.894191 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50dbd3c7-db9e-475c-b96d-679203b54cc6-tigera-ca-bundle\") pod \"calico-kube-controllers-55cdd86887-d59d2\" (UID: \"50dbd3c7-db9e-475c-b96d-679203b54cc6\") " pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" Jul 10 00:37:42.894293 kubelet[2453]: I0710 00:37:42.894208 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkcpm\" (UniqueName: \"kubernetes.io/projected/1f658df4-5506-4d46-bb23-7f9741b9a122-kube-api-access-zkcpm\") pod \"calico-apiserver-9b5f696fd-429ml\" (UID: \"1f658df4-5506-4d46-bb23-7f9741b9a122\") " pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" Jul 10 00:37:42.894451 kubelet[2453]: I0710 00:37:42.894226 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d71dbf9-7620-445d-8d35-2cc9ef195ea7-config-volume\") pod \"coredns-7c65d6cfc9-qzjj6\" (UID: \"2d71dbf9-7620-445d-8d35-2cc9ef195ea7\") " pod="kube-system/coredns-7c65d6cfc9-qzjj6" Jul 10 00:37:42.894451 kubelet[2453]: I0710 00:37:42.894249 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/1f658df4-5506-4d46-bb23-7f9741b9a122-calico-apiserver-certs\") pod \"calico-apiserver-9b5f696fd-429ml\" (UID: \"1f658df4-5506-4d46-bb23-7f9741b9a122\") " pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" Jul 10 00:37:42.894451 kubelet[2453]: I0710 00:37:42.894266 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b35917b6-dff4-49b2-b380-4a0514f6d1e8-goldmane-key-pair\") pod \"goldmane-58fd7646b9-sqpvz\" (UID: \"b35917b6-dff4-49b2-b380-4a0514f6d1e8\") " pod="calico-system/goldmane-58fd7646b9-sqpvz" Jul 10 00:37:42.894451 kubelet[2453]: I0710 00:37:42.894296 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2zsw\" (UniqueName: \"kubernetes.io/projected/7c3c9fa5-4474-4544-97e7-30e66ba1f67c-kube-api-access-k2zsw\") pod \"coredns-7c65d6cfc9-rsqvw\" (UID: \"7c3c9fa5-4474-4544-97e7-30e66ba1f67c\") " pod="kube-system/coredns-7c65d6cfc9-rsqvw" Jul 10 00:37:42.894451 kubelet[2453]: I0710 00:37:42.894313 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscb4\" (UniqueName: \"kubernetes.io/projected/b35917b6-dff4-49b2-b380-4a0514f6d1e8-kube-api-access-nscb4\") pod \"goldmane-58fd7646b9-sqpvz\" (UID: \"b35917b6-dff4-49b2-b380-4a0514f6d1e8\") " pod="calico-system/goldmane-58fd7646b9-sqpvz" Jul 10 00:37:42.894562 kubelet[2453]: I0710 00:37:42.894328 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-ca-bundle\") pod \"whisker-5b6dd46dd7-xjrpt\" (UID: \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\") " pod="calico-system/whisker-5b6dd46dd7-xjrpt" Jul 10 00:37:42.894562 kubelet[2453]: I0710 00:37:42.894344 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2sjp\" (UniqueName: \"kubernetes.io/projected/e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85-kube-api-access-h2sjp\") pod \"calico-apiserver-9b5f696fd-6xg9b\" (UID: \"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85\") " pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" Jul 10 00:37:42.894562 kubelet[2453]: I0710 00:37:42.894391 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvn8w\" (UniqueName: \"kubernetes.io/projected/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-kube-api-access-fvn8w\") pod \"whisker-5b6dd46dd7-xjrpt\" (UID: \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\") " pod="calico-system/whisker-5b6dd46dd7-xjrpt" Jul 10 00:37:42.894562 kubelet[2453]: I0710 00:37:42.894408 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ljs\" (UniqueName: \"kubernetes.io/projected/50dbd3c7-db9e-475c-b96d-679203b54cc6-kube-api-access-77ljs\") pod \"calico-kube-controllers-55cdd86887-d59d2\" (UID: \"50dbd3c7-db9e-475c-b96d-679203b54cc6\") " pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" Jul 10 00:37:42.894562 kubelet[2453]: I0710 00:37:42.894426 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8dxs\" (UniqueName: \"kubernetes.io/projected/2d71dbf9-7620-445d-8d35-2cc9ef195ea7-kube-api-access-b8dxs\") pod \"coredns-7c65d6cfc9-qzjj6\" (UID: \"2d71dbf9-7620-445d-8d35-2cc9ef195ea7\") " 
pod="kube-system/coredns-7c65d6cfc9-qzjj6" Jul 10 00:37:42.912410 systemd[1]: Created slice kubepods-besteffort-pod3b36436a_7d97_4120_92b3_49bbe1e5480c.slice - libcontainer container kubepods-besteffort-pod3b36436a_7d97_4120_92b3_49bbe1e5480c.slice. Jul 10 00:37:42.914964 containerd[1431]: time="2025-07-10T00:37:42.914927009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qd5xx,Uid:3b36436a-7d97-4120-92b3-49bbe1e5480c,Namespace:calico-system,Attempt:0,}" Jul 10 00:37:43.057864 containerd[1431]: time="2025-07-10T00:37:43.057796422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:37:43.111199 containerd[1431]: time="2025-07-10T00:37:43.110859209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdd86887-d59d2,Uid:50dbd3c7-db9e-475c-b96d-679203b54cc6,Namespace:calico-system,Attempt:0,}" Jul 10 00:37:43.121529 containerd[1431]: time="2025-07-10T00:37:43.121483079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-6xg9b,Uid:e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:37:43.129820 kubelet[2453]: E0710 00:37:43.129525 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:43.134710 containerd[1431]: time="2025-07-10T00:37:43.134665636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qzjj6,Uid:2d71dbf9-7620-445d-8d35-2cc9ef195ea7,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:43.136693 containerd[1431]: time="2025-07-10T00:37:43.136577601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-429ml,Uid:1f658df4-5506-4d46-bb23-7f9741b9a122,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:37:43.143535 containerd[1431]: time="2025-07-10T00:37:43.143144859Z" level=error msg="Failed to destroy network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.144590 containerd[1431]: time="2025-07-10T00:37:43.143781621Z" level=error msg="encountered an error cleaning up failed sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.144590 containerd[1431]: time="2025-07-10T00:37:43.143832461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qd5xx,Uid:3b36436a-7d97-4120-92b3-49bbe1e5480c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.147194 kubelet[2453]: E0710 00:37:43.147092 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.148058 containerd[1431]: time="2025-07-10T00:37:43.148001633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b6dd46dd7-xjrpt,Uid:6da9c462-d90c-4dbd-bc12-1f4bc6292e19,Namespace:calico-system,Attempt:0,}" Jul 10 00:37:43.149406 kubelet[2453]: E0710 00:37:43.149325 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qd5xx" Jul 10 00:37:43.149513 kubelet[2453]: E0710 00:37:43.149419 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qd5xx" Jul 10 00:37:43.149554 kubelet[2453]: E0710 00:37:43.149509 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qd5xx_calico-system(3b36436a-7d97-4120-92b3-49bbe1e5480c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qd5xx_calico-system(3b36436a-7d97-4120-92b3-49bbe1e5480c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qd5xx" podUID="3b36436a-7d97-4120-92b3-49bbe1e5480c" Jul 10 00:37:43.151667 kubelet[2453]: E0710 00:37:43.151531 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:43.154161 containerd[1431]: time="2025-07-10T00:37:43.153898969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rsqvw,Uid:7c3c9fa5-4474-4544-97e7-30e66ba1f67c,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:43.181963 containerd[1431]: time="2025-07-10T00:37:43.181889167Z" level=error msg="Failed to destroy network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.182772 containerd[1431]: time="2025-07-10T00:37:43.182724409Z" level=error msg="encountered an error cleaning up failed sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.182851 containerd[1431]: time="2025-07-10T00:37:43.182781249Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdd86887-d59d2,Uid:50dbd3c7-db9e-475c-b96d-679203b54cc6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.183016 kubelet[2453]: E0710 00:37:43.182970 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.183071 kubelet[2453]: E0710 00:37:43.183029 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" Jul 10 00:37:43.183071 kubelet[2453]: E0710 00:37:43.183048 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" Jul 10 00:37:43.183125 kubelet[2453]: E0710 00:37:43.183088 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55cdd86887-d59d2_calico-system(50dbd3c7-db9e-475c-b96d-679203b54cc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55cdd86887-d59d2_calico-system(50dbd3c7-db9e-475c-b96d-679203b54cc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" podUID="50dbd3c7-db9e-475c-b96d-679203b54cc6" Jul 10 00:37:43.219224 containerd[1431]: time="2025-07-10T00:37:43.219149950Z" level=error msg="Failed to destroy network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.220176 containerd[1431]: time="2025-07-10T00:37:43.220022753Z" level=error msg="encountered an error cleaning up failed sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.220176 containerd[1431]: time="2025-07-10T00:37:43.220077233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-6xg9b,Uid:e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.220385 kubelet[2453]: E0710 00:37:43.220318 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.220447 kubelet[2453]: E0710 00:37:43.220402 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" Jul 10 00:37:43.220447 kubelet[2453]: E0710 00:37:43.220421 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" Jul 10 00:37:43.220496 kubelet[2453]: E0710 00:37:43.220474 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b5f696fd-6xg9b_calico-apiserver(e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b5f696fd-6xg9b_calico-apiserver(e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" podUID="e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85" Jul 10 00:37:43.235524 containerd[1431]: time="2025-07-10T00:37:43.235471596Z" level=error msg="Failed to destroy network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.235893 containerd[1431]: time="2025-07-10T00:37:43.235866637Z" level=error msg="encountered an error cleaning up failed sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.235937 containerd[1431]: time="2025-07-10T00:37:43.235917757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-429ml,Uid:1f658df4-5506-4d46-bb23-7f9741b9a122,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.236141 kubelet[2453]: E0710 00:37:43.236106 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.236192 kubelet[2453]: E0710 00:37:43.236164 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" Jul 10 00:37:43.236192 kubelet[2453]: E0710 00:37:43.236183 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" Jul 10 00:37:43.236258 kubelet[2453]: E0710 00:37:43.236219 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b5f696fd-429ml_calico-apiserver(1f658df4-5506-4d46-bb23-7f9741b9a122)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b5f696fd-429ml_calico-apiserver(1f658df4-5506-4d46-bb23-7f9741b9a122)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" podUID="1f658df4-5506-4d46-bb23-7f9741b9a122" Jul 10 00:37:43.247644 containerd[1431]: time="2025-07-10T00:37:43.247570989Z" level=error msg="Failed to destroy network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.249393 containerd[1431]: time="2025-07-10T00:37:43.248025191Z" level=error msg="encountered an error 
cleaning up failed sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.249393 containerd[1431]: time="2025-07-10T00:37:43.248077471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qzjj6,Uid:2d71dbf9-7620-445d-8d35-2cc9ef195ea7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.250466 kubelet[2453]: E0710 00:37:43.248283 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.250466 kubelet[2453]: E0710 00:37:43.248345 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qzjj6" Jul 10 00:37:43.250466 kubelet[2453]: E0710 00:37:43.248383 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qzjj6" Jul 10 00:37:43.250596 containerd[1431]: time="2025-07-10T00:37:43.249537315Z" level=error msg="Failed to destroy network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.250596 containerd[1431]: time="2025-07-10T00:37:43.249807996Z" level=error msg="encountered an error cleaning up failed sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.250596 containerd[1431]: time="2025-07-10T00:37:43.249852476Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rsqvw,Uid:7c3c9fa5-4474-4544-97e7-30e66ba1f67c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.250717 kubelet[2453]: E0710 00:37:43.248429 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qzjj6_kube-system(2d71dbf9-7620-445d-8d35-2cc9ef195ea7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qzjj6_kube-system(2d71dbf9-7620-445d-8d35-2cc9ef195ea7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qzjj6" podUID="2d71dbf9-7620-445d-8d35-2cc9ef195ea7" Jul 10 00:37:43.251220 kubelet[2453]: E0710 00:37:43.251190 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.252115 kubelet[2453]: E0710 00:37:43.251477 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rsqvw" Jul 10 00:37:43.252115 kubelet[2453]: E0710 00:37:43.251502 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rsqvw" Jul 10 00:37:43.252281 kubelet[2453]: E0710 00:37:43.252090 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rsqvw_kube-system(7c3c9fa5-4474-4544-97e7-30e66ba1f67c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rsqvw_kube-system(7c3c9fa5-4474-4544-97e7-30e66ba1f67c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rsqvw" podUID="7c3c9fa5-4474-4544-97e7-30e66ba1f67c" Jul 10 00:37:43.258250 containerd[1431]: time="2025-07-10T00:37:43.258211179Z" level=error msg="Failed to destroy network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 
00:37:43.259658 containerd[1431]: time="2025-07-10T00:37:43.259505303Z" level=error msg="encountered an error cleaning up failed sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.259658 containerd[1431]: time="2025-07-10T00:37:43.259559423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b6dd46dd7-xjrpt,Uid:6da9c462-d90c-4dbd-bc12-1f4bc6292e19,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.260207 kubelet[2453]: E0710 00:37:43.259878 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:43.260207 kubelet[2453]: E0710 00:37:43.259922 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b6dd46dd7-xjrpt" Jul 10 00:37:43.260207 kubelet[2453]: E0710 00:37:43.259939 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b6dd46dd7-xjrpt" Jul 10 00:37:43.260422 kubelet[2453]: E0710 00:37:43.259974 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b6dd46dd7-xjrpt_calico-system(6da9c462-d90c-4dbd-bc12-1f4bc6292e19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b6dd46dd7-xjrpt_calico-system(6da9c462-d90c-4dbd-bc12-1f4bc6292e19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b6dd46dd7-xjrpt" podUID="6da9c462-d90c-4dbd-bc12-1f4bc6292e19" Jul 10 00:37:44.011670 kubelet[2453]: E0710 00:37:44.010313 2453 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 00:37:44.013311 kubelet[2453]: E0710 00:37:44.013279 2453 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/b35917b6-dff4-49b2-b380-4a0514f6d1e8-goldmane-ca-bundle podName:b35917b6-dff4-49b2-b380-4a0514f6d1e8 nodeName:}" failed. No retries permitted until 2025-07-10 00:37:44.513241276 +0000 UTC m=+27.698345264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/b35917b6-dff4-49b2-b380-4a0514f6d1e8-goldmane-ca-bundle") pod "goldmane-58fd7646b9-sqpvz" (UID: "b35917b6-dff4-49b2-b380-4a0514f6d1e8") : failed to sync configmap cache: timed out waiting for the condition Jul 10 00:37:44.023255 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0-shm.mount: Deactivated successfully. Jul 10 00:37:44.023651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df-shm.mount: Deactivated successfully. Jul 10 00:37:44.067579 kubelet[2453]: I0710 00:37:44.067536 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:37:44.071793 kubelet[2453]: I0710 00:37:44.071749 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:37:44.074324 containerd[1431]: time="2025-07-10T00:37:44.074286635Z" level=info msg="StopPodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\"" Jul 10 00:37:44.075326 containerd[1431]: time="2025-07-10T00:37:44.075044717Z" level=info msg="Ensure that sandbox f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95 in task-service has been cleanup successfully" Jul 10 00:37:44.075737 containerd[1431]: time="2025-07-10T00:37:44.074877036Z" level=info msg="StopPodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\"" Jul 10 00:37:44.075737 containerd[1431]: time="2025-07-10T00:37:44.075569998Z" level=info msg="Ensure that sandbox 67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0 in task-service has been cleanup successfully" Jul 10 00:37:44.076546 kubelet[2453]: I0710 00:37:44.076508 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:37:44.078793 containerd[1431]: time="2025-07-10T00:37:44.078757007Z" level=info msg="StopPodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\"" Jul 10 00:37:44.079210 containerd[1431]: time="2025-07-10T00:37:44.079177328Z" level=info msg="Ensure that sandbox a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25 in task-service has been cleanup successfully" Jul 10 00:37:44.080302 kubelet[2453]: I0710 00:37:44.079803 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:37:44.083606 containerd[1431]: time="2025-07-10T00:37:44.083520779Z" level=info msg="StopPodSandbox for \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\"" Jul 10 00:37:44.083680 containerd[1431]: time="2025-07-10T00:37:44.083669379Z" level=info msg="Ensure that sandbox b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a in task-service has been cleanup successfully" Jul 10 00:37:44.102371 kubelet[2453]: I0710 00:37:44.102326 2453 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Jul 10 00:37:44.103034 containerd[1431]: time="2025-07-10T00:37:44.103005230Z" level=info msg="StopPodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\"" Jul 10 00:37:44.103455 containerd[1431]: time="2025-07-10T00:37:44.103431551Z" level=info msg="Ensure that sandbox 89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df in task-service has been cleanup successfully" Jul 10 00:37:44.105874 kubelet[2453]: I0710 00:37:44.105843 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Jul 10 00:37:44.106349 containerd[1431]: time="2025-07-10T00:37:44.106318798Z" level=info msg="StopPodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\"" Jul 10 00:37:44.106635 containerd[1431]: time="2025-07-10T00:37:44.106610559Z" level=info msg="Ensure that sandbox 4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6 in task-service has been cleanup successfully" Jul 10 00:37:44.114325 kubelet[2453]: I0710 00:37:44.114289 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:37:44.115918 containerd[1431]: time="2025-07-10T00:37:44.115881023Z" level=info msg="StopPodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\"" Jul 10 00:37:44.117149 containerd[1431]: time="2025-07-10T00:37:44.117114066Z" level=info msg="Ensure that sandbox 0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b in task-service has been cleanup successfully" Jul 10 00:37:44.155122 containerd[1431]: time="2025-07-10T00:37:44.155055885Z" level=error msg="StopPodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" failed" error="failed to destroy network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.156345 kubelet[2453]: E0710 00:37:44.155403 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:37:44.156345 kubelet[2453]: E0710 00:37:44.156280 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25"} Jul 10 00:37:44.156345 kubelet[2453]: E0710 00:37:44.156339 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c3c9fa5-4474-4544-97e7-30e66ba1f67c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Jul 10 00:37:44.156599 containerd[1431]: time="2025-07-10T00:37:44.156244048Z" level=error msg="StopPodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" failed" error="failed to destroy network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.156636 kubelet[2453]: E0710 00:37:44.156380 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c3c9fa5-4474-4544-97e7-30e66ba1f67c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rsqvw" podUID="7c3c9fa5-4474-4544-97e7-30e66ba1f67c" Jul 10 00:37:44.156636 kubelet[2453]: E0710 00:37:44.156426 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:37:44.156636 kubelet[2453]: E0710 00:37:44.156441 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0"} Jul 10 00:37:44.156636 kubelet[2453]: E0710 00:37:44.156460 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50dbd3c7-db9e-475c-b96d-679203b54cc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:44.156757 kubelet[2453]: E0710 00:37:44.156474 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50dbd3c7-db9e-475c-b96d-679203b54cc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" podUID="50dbd3c7-db9e-475c-b96d-679203b54cc6" Jul 10 00:37:44.163797 containerd[1431]: time="2025-07-10T00:37:44.163519747Z" level=error msg="StopPodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" failed" error="failed to destroy network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.163896 kubelet[2453]: E0710 00:37:44.163724 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Jul 10 00:37:44.163896 kubelet[2453]: E0710 00:37:44.163767 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"} Jul 10 00:37:44.163896 kubelet[2453]: E0710 00:37:44.163811 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f658df4-5506-4d46-bb23-7f9741b9a122\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:44.163896 kubelet[2453]: E0710 00:37:44.163832 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f658df4-5506-4d46-bb23-7f9741b9a122\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" podUID="1f658df4-5506-4d46-bb23-7f9741b9a122" Jul 10 00:37:44.164426 containerd[1431]: time="2025-07-10T00:37:44.164396030Z" level=error msg="StopPodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" failed" error="failed to destroy network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.164841 kubelet[2453]: E0710 00:37:44.164785 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Jul 10 00:37:44.164906 kubelet[2453]: E0710 00:37:44.164844 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"} Jul 10 00:37:44.164906 kubelet[2453]: E0710 00:37:44.164870 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b36436a-7d97-4120-92b3-49bbe1e5480c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:44.164906 kubelet[2453]: E0710 00:37:44.164892 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b36436a-7d97-4120-92b3-49bbe1e5480c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qd5xx" podUID="3b36436a-7d97-4120-92b3-49bbe1e5480c" Jul 10 00:37:44.169434 containerd[1431]: time="2025-07-10T00:37:44.169086682Z" level=error msg="StopPodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" failed" error="failed to destroy network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.169535 kubelet[2453]: E0710 00:37:44.169499 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:37:44.169601 kubelet[2453]: E0710 00:37:44.169542 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95"} Jul 10 00:37:44.169601 kubelet[2453]: E0710 00:37:44.169570 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d71dbf9-7620-445d-8d35-2cc9ef195ea7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:44.169601 kubelet[2453]: E0710 00:37:44.169589 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d71dbf9-7620-445d-8d35-2cc9ef195ea7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qzjj6" podUID="2d71dbf9-7620-445d-8d35-2cc9ef195ea7" Jul 10 00:37:44.176610 containerd[1431]: time="2025-07-10T00:37:44.176563541Z" level=error msg="StopPodSandbox for 
\"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" failed" error="failed to destroy network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.176833 kubelet[2453]: E0710 00:37:44.176783 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:37:44.176888 kubelet[2453]: E0710 00:37:44.176838 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a"} Jul 10 00:37:44.176888 kubelet[2453]: E0710 00:37:44.176871 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:44.176960 kubelet[2453]: E0710 00:37:44.176894 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b6dd46dd7-xjrpt" podUID="6da9c462-d90c-4dbd-bc12-1f4bc6292e19" Jul 10 00:37:44.180676 containerd[1431]: time="2025-07-10T00:37:44.179293669Z" level=error msg="StopPodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" failed" error="failed to destroy network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.180775 kubelet[2453]: E0710 00:37:44.179505 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:37:44.180775 kubelet[2453]: E0710 00:37:44.179545 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b"} Jul 10 00:37:44.180775 kubelet[2453]: E0710 00:37:44.179572 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:44.180775 kubelet[2453]: E0710 00:37:44.179593 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" podUID="e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85" Jul 10 00:37:44.641227 containerd[1431]: time="2025-07-10T00:37:44.641186592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-sqpvz,Uid:b35917b6-dff4-49b2-b380-4a0514f6d1e8,Namespace:calico-system,Attempt:0,}" Jul 10 00:37:44.708679 containerd[1431]: time="2025-07-10T00:37:44.708620008Z" level=error msg="Failed to destroy network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.709193 containerd[1431]: time="2025-07-10T00:37:44.709139089Z" level=error msg="encountered an error cleaning up failed sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.709238 containerd[1431]: time="2025-07-10T00:37:44.709194890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-sqpvz,Uid:b35917b6-dff4-49b2-b380-4a0514f6d1e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.709485 kubelet[2453]: E0710 00:37:44.709446 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:44.710215 kubelet[2453]: E0710 00:37:44.709505 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-sqpvz" Jul 10 00:37:44.710215 kubelet[2453]: E0710 00:37:44.709523 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-sqpvz" Jul 10 00:37:44.710215 kubelet[2453]: E0710 00:37:44.709562 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-sqpvz_calico-system(b35917b6-dff4-49b2-b380-4a0514f6d1e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-sqpvz_calico-system(b35917b6-dff4-49b2-b380-4a0514f6d1e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-sqpvz" podUID="b35917b6-dff4-49b2-b380-4a0514f6d1e8" Jul 10 00:37:44.712087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3-shm.mount: Deactivated successfully. 
Jul 10 00:37:45.116800 kubelet[2453]: I0710 00:37:45.116774 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:37:45.117307 containerd[1431]: time="2025-07-10T00:37:45.117278254Z" level=info msg="StopPodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\"" Jul 10 00:37:45.117713 containerd[1431]: time="2025-07-10T00:37:45.117696495Z" level=info msg="Ensure that sandbox 10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3 in task-service has been cleanup successfully" Jul 10 00:37:45.154743 containerd[1431]: time="2025-07-10T00:37:45.154644945Z" level=error msg="StopPodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" failed" error="failed to destroy network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:37:45.155112 kubelet[2453]: E0710 00:37:45.155061 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:37:45.155167 kubelet[2453]: E0710 00:37:45.155120 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3"} Jul 10 00:37:45.155167 kubelet[2453]: E0710 00:37:45.155153 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b35917b6-dff4-49b2-b380-4a0514f6d1e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:37:45.155248 kubelet[2453]: E0710 00:37:45.155178 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b35917b6-dff4-49b2-b380-4a0514f6d1e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-sqpvz" podUID="b35917b6-dff4-49b2-b380-4a0514f6d1e8" Jul 10 00:37:46.337287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510393846.mount: Deactivated successfully. 
Jul 10 00:37:46.560714 containerd[1431]: time="2025-07-10T00:37:46.560640375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 00:37:46.579805 containerd[1431]: time="2025-07-10T00:37:46.579745179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:46.581779 containerd[1431]: time="2025-07-10T00:37:46.581722584Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:46.583327 containerd[1431]: time="2025-07-10T00:37:46.583095027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:46.584022 containerd[1431]: time="2025-07-10T00:37:46.583560508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.525697326s" Jul 10 00:37:46.584022 containerd[1431]: time="2025-07-10T00:37:46.583600948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 00:37:46.593514 containerd[1431]: time="2025-07-10T00:37:46.593417410Z" level=info msg="CreateContainer within sandbox \"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:37:46.631688 containerd[1431]: time="2025-07-10T00:37:46.631640178Z" level=info msg="CreateContainer within sandbox \"3be818d02a078ca52b7f05c90a8495539fa7fb86c4102f5579a82c2a172b1b36\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"50b4792a48036300188299cee9b38ab30ed6e17121c9973bc4eddb6a2e35f62d\"" Jul 10 00:37:46.632593 containerd[1431]: time="2025-07-10T00:37:46.632476700Z" level=info msg="StartContainer for \"50b4792a48036300188299cee9b38ab30ed6e17121c9973bc4eddb6a2e35f62d\"" Jul 10 00:37:46.691555 systemd[1]: Started cri-containerd-50b4792a48036300188299cee9b38ab30ed6e17121c9973bc4eddb6a2e35f62d.scope - libcontainer container 50b4792a48036300188299cee9b38ab30ed6e17121c9973bc4eddb6a2e35f62d. Jul 10 00:37:46.720595 containerd[1431]: time="2025-07-10T00:37:46.720546102Z" level=info msg="StartContainer for \"50b4792a48036300188299cee9b38ab30ed6e17121c9973bc4eddb6a2e35f62d\" returns successfully" Jul 10 00:37:47.012617 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:37:47.012749 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 10 00:37:47.157612 kubelet[2453]: I0710 00:37:47.157542 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xjwnj" podStartSLOduration=1.8227316980000001 podStartE2EDuration="11.15752444s" podCreationTimestamp="2025-07-10 00:37:36 +0000 UTC" firstStartedPulling="2025-07-10 00:37:37.249612128 +0000 UTC m=+20.434716156" lastFinishedPulling="2025-07-10 00:37:46.58440487 +0000 UTC m=+29.769508898" observedRunningTime="2025-07-10 00:37:47.157072599 +0000 UTC m=+30.342176627" watchObservedRunningTime="2025-07-10 00:37:47.15752444 +0000 UTC m=+30.342628468" Jul 10 00:37:47.170985 containerd[1431]: time="2025-07-10T00:37:47.170940629Z" level=info msg="StopPodSandbox for \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\"" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.287 [INFO][3755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.290 [INFO][3755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" iface="eth0" netns="/var/run/netns/cni-f3520de7-6788-c969-8041-d2ec7c2a9355" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.290 [INFO][3755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" iface="eth0" netns="/var/run/netns/cni-f3520de7-6788-c969-8041-d2ec7c2a9355" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.293 [INFO][3755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" iface="eth0" netns="/var/run/netns/cni-f3520de7-6788-c969-8041-d2ec7c2a9355" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.293 [INFO][3755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.293 [INFO][3755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.437 [INFO][3788] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.437 [INFO][3788] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.437 [INFO][3788] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.448 [WARNING][3788] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.448 [INFO][3788] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.450 [INFO][3788] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:47.453630 containerd[1431]: 2025-07-10 00:37:47.451 [INFO][3755] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:37:47.456388 containerd[1431]: time="2025-07-10T00:37:47.454434358Z" level=info msg="TearDown network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" successfully" Jul 10 00:37:47.456388 containerd[1431]: time="2025-07-10T00:37:47.454466118Z" level=info msg="StopPodSandbox for \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" returns successfully" Jul 10 00:37:47.459778 systemd[1]: run-netns-cni\x2df3520de7\x2d6788\x2dc969\x2d8041\x2dd2ec7c2a9355.mount: Deactivated successfully. Jul 10 00:37:47.528498 kubelet[2453]: I0710 00:37:47.528451 2453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-backend-key-pair\") pod \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\" (UID: \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\") " Jul 10 00:37:47.528498 kubelet[2453]: I0710 00:37:47.528501 2453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvn8w\" (UniqueName: \"kubernetes.io/projected/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-kube-api-access-fvn8w\") pod \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\" (UID: \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\") " Jul 10 00:37:47.528663 kubelet[2453]: I0710 00:37:47.528533 2453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-ca-bundle\") pod \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\" (UID: \"6da9c462-d90c-4dbd-bc12-1f4bc6292e19\") " Jul 10 00:37:47.537017 kubelet[2453]: I0710 00:37:47.535162 2453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-kube-api-access-fvn8w" (OuterVolumeSpecName: "kube-api-access-fvn8w") pod "6da9c462-d90c-4dbd-bc12-1f4bc6292e19" (UID: "6da9c462-d90c-4dbd-bc12-1f4bc6292e19"). InnerVolumeSpecName "kube-api-access-fvn8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:37:47.537017 kubelet[2453]: I0710 00:37:47.536558 2453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6da9c462-d90c-4dbd-bc12-1f4bc6292e19" (UID: "6da9c462-d90c-4dbd-bc12-1f4bc6292e19"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:37:47.537088 systemd[1]: var-lib-kubelet-pods-6da9c462\x2dd90c\x2d4dbd\x2dbc12\x2d1f4bc6292e19-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfvn8w.mount: Deactivated successfully. Jul 10 00:37:47.545828 kubelet[2453]: I0710 00:37:47.545786 2453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6da9c462-d90c-4dbd-bc12-1f4bc6292e19" (UID: "6da9c462-d90c-4dbd-bc12-1f4bc6292e19"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:37:47.547209 systemd[1]: var-lib-kubelet-pods-6da9c462\x2dd90c\x2d4dbd\x2dbc12\x2d1f4bc6292e19-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:37:47.629194 kubelet[2453]: I0710 00:37:47.629148 2453 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:37:47.629194 kubelet[2453]: I0710 00:37:47.629181 2453 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvn8w\" (UniqueName: \"kubernetes.io/projected/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-kube-api-access-fvn8w\") on node \"localhost\" DevicePath \"\"" Jul 10 00:37:47.629194 kubelet[2453]: I0710 00:37:47.629193 2453 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da9c462-d90c-4dbd-bc12-1f4bc6292e19-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:37:48.140158 systemd[1]: Removed slice kubepods-besteffort-pod6da9c462_d90c_4dbd_bc12_1f4bc6292e19.slice - libcontainer container kubepods-besteffort-pod6da9c462_d90c_4dbd_bc12_1f4bc6292e19.slice. Jul 10 00:37:48.188693 systemd[1]: Created slice kubepods-besteffort-poda4001044_04eb_4a60_89e2_d5ca44b6800c.slice - libcontainer container kubepods-besteffort-poda4001044_04eb_4a60_89e2_d5ca44b6800c.slice. 
Jul 10 00:37:48.232841 kubelet[2453]: I0710 00:37:48.232803 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4001044-04eb-4a60-89e2-d5ca44b6800c-whisker-backend-key-pair\") pod \"whisker-54db5bbcb9-vkt2t\" (UID: \"a4001044-04eb-4a60-89e2-d5ca44b6800c\") " pod="calico-system/whisker-54db5bbcb9-vkt2t" Jul 10 00:37:48.232841 kubelet[2453]: I0710 00:37:48.232848 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4001044-04eb-4a60-89e2-d5ca44b6800c-whisker-ca-bundle\") pod \"whisker-54db5bbcb9-vkt2t\" (UID: \"a4001044-04eb-4a60-89e2-d5ca44b6800c\") " pod="calico-system/whisker-54db5bbcb9-vkt2t" Jul 10 00:37:48.233189 kubelet[2453]: I0710 00:37:48.232881 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlzc\" (UniqueName: \"kubernetes.io/projected/a4001044-04eb-4a60-89e2-d5ca44b6800c-kube-api-access-gmlzc\") pod \"whisker-54db5bbcb9-vkt2t\" (UID: \"a4001044-04eb-4a60-89e2-d5ca44b6800c\") " pod="calico-system/whisker-54db5bbcb9-vkt2t" Jul 10 00:37:48.493847 containerd[1431]: time="2025-07-10T00:37:48.493789563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54db5bbcb9-vkt2t,Uid:a4001044-04eb-4a60-89e2-d5ca44b6800c,Namespace:calico-system,Attempt:0,}" Jul 10 00:37:48.718010 systemd-networkd[1370]: cali0eebb7a41cd: Link UP Jul 10 00:37:48.718965 systemd-networkd[1370]: cali0eebb7a41cd: Gained carrier Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.620 [INFO][3932] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.636 [INFO][3932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0 whisker-54db5bbcb9- calico-system a4001044-04eb-4a60-89e2-d5ca44b6800c 880 0 2025-07-10 00:37:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54db5bbcb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-54db5bbcb9-vkt2t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0eebb7a41cd [] [] }} ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.636 [INFO][3932] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.661 [INFO][3948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" HandleID="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Workload="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.661 [INFO][3948] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" HandleID="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Workload="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-54db5bbcb9-vkt2t", "timestamp":"2025-07-10 00:37:48.661487981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.661 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.661 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.661 [INFO][3948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.673 [INFO][3948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.690 [INFO][3948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.694 [INFO][3948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.696 [INFO][3948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.698 [INFO][3948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.698 [INFO][3948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.700 [INFO][3948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.704 [INFO][3948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.709 [INFO][3948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.709 [INFO][3948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" host="localhost" Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.709 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:48.734780 containerd[1431]: 2025-07-10 00:37:48.709 [INFO][3948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" HandleID="k8s-pod-network.48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Workload="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.735476 containerd[1431]: 2025-07-10 00:37:48.711 [INFO][3932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0", GenerateName:"whisker-54db5bbcb9-", Namespace:"calico-system", SelfLink:"", UID:"a4001044-04eb-4a60-89e2-d5ca44b6800c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54db5bbcb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-54db5bbcb9-vkt2t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0eebb7a41cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:48.735476 containerd[1431]: 2025-07-10 00:37:48.711 [INFO][3932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.735476 containerd[1431]: 2025-07-10 00:37:48.711 [INFO][3932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0eebb7a41cd ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.735476 containerd[1431]: 2025-07-10 00:37:48.723 [INFO][3932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.735476 containerd[1431]: 2025-07-10 00:37:48.723 [INFO][3932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0", GenerateName:"whisker-54db5bbcb9-", Namespace:"calico-system", SelfLink:"", UID:"a4001044-04eb-4a60-89e2-d5ca44b6800c", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54db5bbcb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d", Pod:"whisker-54db5bbcb9-vkt2t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0eebb7a41cd", MAC:"ba:c8:53:4b:16:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:48.735476 containerd[1431]: 2025-07-10 00:37:48.732 [INFO][3932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d" Namespace="calico-system" Pod="whisker-54db5bbcb9-vkt2t" WorkloadEndpoint="localhost-k8s-whisker--54db5bbcb9--vkt2t-eth0" Jul 10 00:37:48.771264 containerd[1431]: time="2025-07-10T00:37:48.769952359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:48.771264 containerd[1431]: time="2025-07-10T00:37:48.770438120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:48.771264 containerd[1431]: time="2025-07-10T00:37:48.770453080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:48.771492 containerd[1431]: time="2025-07-10T00:37:48.770557201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:48.802558 systemd[1]: Started cri-containerd-48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d.scope - libcontainer container 48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d. 
Jul 10 00:37:48.813351 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:48.837862 containerd[1431]: time="2025-07-10T00:37:48.837812136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54db5bbcb9-vkt2t,Uid:a4001044-04eb-4a60-89e2-d5ca44b6800c,Namespace:calico-system,Attempt:0,} returns sandbox id \"48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d\"" Jul 10 00:37:48.839778 containerd[1431]: time="2025-07-10T00:37:48.839536019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:37:48.916851 kubelet[2453]: I0710 00:37:48.916792 2453 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6da9c462-d90c-4dbd-bc12-1f4bc6292e19" path="/var/lib/kubelet/pods/6da9c462-d90c-4dbd-bc12-1f4bc6292e19/volumes" Jul 10 00:37:49.702038 containerd[1431]: time="2025-07-10T00:37:49.701191306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:49.702038 containerd[1431]: time="2025-07-10T00:37:49.701770787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 00:37:49.703886 containerd[1431]: time="2025-07-10T00:37:49.703849591Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:49.707530 containerd[1431]: time="2025-07-10T00:37:49.707064877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:49.708152 containerd[1431]: time="2025-07-10T00:37:49.708111519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 868.53866ms" Jul 10 00:37:49.708152 containerd[1431]: time="2025-07-10T00:37:49.708149119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 00:37:49.716723 containerd[1431]: time="2025-07-10T00:37:49.716607295Z" level=info msg="CreateContainer within sandbox \"48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:37:49.731478 containerd[1431]: time="2025-07-10T00:37:49.731426883Z" level=info msg="CreateContainer within sandbox \"48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8f7f72f4e42fd32d79480b3c63e0b8bbb6a4d32d9debda9d221b8f83e49872b5\"" Jul 10 00:37:49.731957 containerd[1431]: time="2025-07-10T00:37:49.731922284Z" level=info msg="StartContainer for \"8f7f72f4e42fd32d79480b3c63e0b8bbb6a4d32d9debda9d221b8f83e49872b5\"" Jul 10 00:37:49.761565 systemd[1]: Started cri-containerd-8f7f72f4e42fd32d79480b3c63e0b8bbb6a4d32d9debda9d221b8f83e49872b5.scope - libcontainer container 8f7f72f4e42fd32d79480b3c63e0b8bbb6a4d32d9debda9d221b8f83e49872b5. 
Jul 10 00:37:49.790497 containerd[1431]: time="2025-07-10T00:37:49.790386235Z" level=info msg="StartContainer for \"8f7f72f4e42fd32d79480b3c63e0b8bbb6a4d32d9debda9d221b8f83e49872b5\" returns successfully" Jul 10 00:37:49.794095 containerd[1431]: time="2025-07-10T00:37:49.793970241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:37:50.612549 systemd-networkd[1370]: cali0eebb7a41cd: Gained IPv6LL Jul 10 00:37:51.403105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822679417.mount: Deactivated successfully. Jul 10 00:37:51.421568 containerd[1431]: time="2025-07-10T00:37:51.421057738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:51.421934 containerd[1431]: time="2025-07-10T00:37:51.421583219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 00:37:51.425848 containerd[1431]: time="2025-07-10T00:37:51.425638186Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:51.428662 containerd[1431]: time="2025-07-10T00:37:51.428618671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:51.429893 containerd[1431]: time="2025-07-10T00:37:51.429858553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.635852752s" Jul 10 00:37:51.430104 containerd[1431]: time="2025-07-10T00:37:51.429994753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 00:37:51.432438 containerd[1431]: time="2025-07-10T00:37:51.432407797Z" level=info msg="CreateContainer within sandbox \"48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:37:51.444182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1279850748.mount: Deactivated successfully. Jul 10 00:37:51.449083 containerd[1431]: time="2025-07-10T00:37:51.449039225Z" level=info msg="CreateContainer within sandbox \"48f9087b2d08fdf292c56e9b3d0b235672a739a5df5302bc7619d4d4ded7a89d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2bf26e3dd4fb62457264b8d49bf9e3cce38dbaba523d42402e4fbec17ee9a35c\"" Jul 10 00:37:51.449653 containerd[1431]: time="2025-07-10T00:37:51.449622026Z" level=info msg="StartContainer for \"2bf26e3dd4fb62457264b8d49bf9e3cce38dbaba523d42402e4fbec17ee9a35c\"" Jul 10 00:37:51.489559 systemd[1]: Started cri-containerd-2bf26e3dd4fb62457264b8d49bf9e3cce38dbaba523d42402e4fbec17ee9a35c.scope - libcontainer container 2bf26e3dd4fb62457264b8d49bf9e3cce38dbaba523d42402e4fbec17ee9a35c. 
Jul 10 00:37:51.519745 containerd[1431]: time="2025-07-10T00:37:51.519596302Z" level=info msg="StartContainer for \"2bf26e3dd4fb62457264b8d49bf9e3cce38dbaba523d42402e4fbec17ee9a35c\" returns successfully" Jul 10 00:37:52.157858 kubelet[2453]: I0710 00:37:52.157783 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54db5bbcb9-vkt2t" podStartSLOduration=1.566317369 podStartE2EDuration="4.157768424s" podCreationTimestamp="2025-07-10 00:37:48 +0000 UTC" firstStartedPulling="2025-07-10 00:37:48.839144539 +0000 UTC m=+32.024248527" lastFinishedPulling="2025-07-10 00:37:51.430595554 +0000 UTC m=+34.615699582" observedRunningTime="2025-07-10 00:37:52.156540062 +0000 UTC m=+35.341644090" watchObservedRunningTime="2025-07-10 00:37:52.157768424 +0000 UTC m=+35.342872412" Jul 10 00:37:54.908486 containerd[1431]: time="2025-07-10T00:37:54.908390154Z" level=info msg="StopPodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\"" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:54.956 [INFO][4256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:54.956 [INFO][4256] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" iface="eth0" netns="/var/run/netns/cni-692714cd-a49e-f62b-cb3c-97b1370d14a9" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:54.956 [INFO][4256] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" iface="eth0" netns="/var/run/netns/cni-692714cd-a49e-f62b-cb3c-97b1370d14a9" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:54.957 [INFO][4256] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" iface="eth0" netns="/var/run/netns/cni-692714cd-a49e-f62b-cb3c-97b1370d14a9" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:54.957 [INFO][4256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:54.957 [INFO][4256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.011 [INFO][4271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.011 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.011 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.023 [WARNING][4271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.023 [INFO][4271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.027 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:55.032920 containerd[1431]: 2025-07-10 00:37:55.029 [INFO][4256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Jul 10 00:37:55.033426 containerd[1431]: time="2025-07-10T00:37:55.033167522Z" level=info msg="TearDown network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" successfully" Jul 10 00:37:55.033426 containerd[1431]: time="2025-07-10T00:37:55.033221482Z" level=info msg="StopPodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" returns successfully" Jul 10 00:37:55.034688 containerd[1431]: time="2025-07-10T00:37:55.034095763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-429ml,Uid:1f658df4-5506-4d46-bb23-7f9741b9a122,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:37:55.035855 systemd[1]: run-netns-cni\x2d692714cd\x2da49e\x2df62b\x2dcb3c\x2d97b1370d14a9.mount: Deactivated successfully. Jul 10 00:37:55.225593 systemd-networkd[1370]: cali468623cb9d2: Link UP Jul 10 00:37:55.225795 systemd-networkd[1370]: cali468623cb9d2: Gained carrier Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.115 [INFO][4294] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.128 [INFO][4294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0 calico-apiserver-9b5f696fd- calico-apiserver 1f658df4-5506-4d46-bb23-7f9741b9a122 916 0 2025-07-10 00:37:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b5f696fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9b5f696fd-429ml eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali468623cb9d2 [] [] }} ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.128 [INFO][4294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.154 [INFO][4310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" HandleID="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.155 [INFO][4310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" HandleID="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9b5f696fd-429ml", "timestamp":"2025-07-10 00:37:55.154869198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.155 [INFO][4310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.155 [INFO][4310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.155 [INFO][4310] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.164 [INFO][4310] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.169 [INFO][4310] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.173 [INFO][4310] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.175 [INFO][4310] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.177 [INFO][4310] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.177 [INFO][4310] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.179 [INFO][4310] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209 Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.214 [INFO][4310] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.221 [INFO][4310] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.221 [INFO][4310] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" host="localhost" Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.221 [INFO][4310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:55.246115 containerd[1431]: 2025-07-10 00:37:55.221 [INFO][4310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" HandleID="k8s-pod-network.3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.246941 containerd[1431]: 2025-07-10 00:37:55.223 [INFO][4294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f658df4-5506-4d46-bb23-7f9741b9a122", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9b5f696fd-429ml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468623cb9d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:55.246941 containerd[1431]: 2025-07-10 00:37:55.223 [INFO][4294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.246941 containerd[1431]: 2025-07-10 00:37:55.223 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468623cb9d2 ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.246941 containerd[1431]: 2025-07-10 00:37:55.227 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.246941 containerd[1431]: 2025-07-10 00:37:55.228 [INFO][4294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f658df4-5506-4d46-bb23-7f9741b9a122", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209", Pod:"calico-apiserver-9b5f696fd-429ml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468623cb9d2", MAC:"3e:2b:ab:05:89:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:55.246941 containerd[1431]: 2025-07-10 00:37:55.242 [INFO][4294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-429ml" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0" Jul 10 00:37:55.263330 containerd[1431]: time="2025-07-10T00:37:55.263029376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:55.263330 containerd[1431]: time="2025-07-10T00:37:55.263141536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:55.263330 containerd[1431]: time="2025-07-10T00:37:55.263180856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:55.263330 containerd[1431]: time="2025-07-10T00:37:55.263295537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:55.267215 kubelet[2453]: I0710 00:37:55.267175 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:37:55.269624 kubelet[2453]: E0710 00:37:55.269345 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:55.289708 systemd[1]: Started cri-containerd-3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209.scope - libcontainer container 3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209. Jul 10 00:37:55.302633 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:55.334368 containerd[1431]: time="2025-07-10T00:37:55.334326668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-429ml,Uid:1f658df4-5506-4d46-bb23-7f9741b9a122,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209\"" Jul 10 00:37:55.335996 containerd[1431]: time="2025-07-10T00:37:55.335968910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:37:55.847637 kernel: bpftool[4387]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 10 00:37:55.917444 containerd[1431]: time="2025-07-10T00:37:55.913576210Z" level=info msg="StopPodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\"" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.960 [INFO][4399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.960 [INFO][4399] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" iface="eth0" netns="/var/run/netns/cni-270d33f9-ea2a-f40f-f906-f99242f40b91" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.960 [INFO][4399] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" iface="eth0" netns="/var/run/netns/cni-270d33f9-ea2a-f40f-f906-f99242f40b91" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.961 [INFO][4399] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" iface="eth0" netns="/var/run/netns/cni-270d33f9-ea2a-f40f-f906-f99242f40b91" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.961 [INFO][4399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.961 [INFO][4399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.987 [INFO][4414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.987 [INFO][4414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.987 [INFO][4414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.995 [WARNING][4414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.996 [INFO][4414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:55.997 [INFO][4414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:56.010411 containerd[1431]: 2025-07-10 00:37:56.003 [INFO][4399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:37:56.010411 containerd[1431]: time="2025-07-10T00:37:56.007452450Z" level=info msg="TearDown network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" successfully" Jul 10 00:37:56.010411 containerd[1431]: time="2025-07-10T00:37:56.007480930Z" level=info msg="StopPodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" returns successfully" Jul 10 00:37:56.011592 kubelet[2453]: E0710 00:37:56.011078 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:56.011889 containerd[1431]: time="2025-07-10T00:37:56.011859655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qzjj6,Uid:2d71dbf9-7620-445d-8d35-2cc9ef195ea7,Namespace:kube-system,Attempt:1,}" Jul 10 00:37:56.040877 systemd[1]: run-netns-cni\x2d270d33f9\x2dea2a\x2df40f\x2df906\x2df99242f40b91.mount: Deactivated successfully. 
Jul 10 00:37:56.048467 systemd-networkd[1370]: vxlan.calico: Link UP Jul 10 00:37:56.048474 systemd-networkd[1370]: vxlan.calico: Gained carrier Jul 10 00:37:56.156916 kubelet[2453]: E0710 00:37:56.156815 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:56.213142 systemd-networkd[1370]: cali409f727fa17: Link UP Jul 10 00:37:56.214551 systemd-networkd[1370]: cali409f727fa17: Gained carrier Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.109 [INFO][4438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0 coredns-7c65d6cfc9- kube-system 2d71dbf9-7620-445d-8d35-2cc9ef195ea7 930 0 2025-07-10 00:37:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qzjj6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali409f727fa17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.110 [INFO][4438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.154 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" HandleID="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.154 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" HandleID="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d850), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qzjj6", "timestamp":"2025-07-10 00:37:56.154229746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.154 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.154 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
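About the earlier kernel line `bpftool[4387]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set`: since Linux 6.3 the kernel warns once per process when a caller does not state whether its memfd may ever be executable, and the fix on the caller's side is to pass one of the two flags. A minimal sketch of the non-executable form; the flag values are copied from `<linux/memfd.h>` and defined locally on the assumption that the x/sys/unix build in use may predate MFD_NOEXEC_SEAL:

```go
// Minimal sketch: memfd_create with its executability stated explicitly.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

const (
	mfdCloexec    = 0x0001 // MFD_CLOEXEC
	mfdNoexecSeal = 0x0008 // MFD_NOEXEC_SEAL, Linux >= 6.3
)

func main() {
	fd, err := unix.MemfdCreate("demo", mfdCloexec|mfdNoexecSeal)
	if err != nil {
		// Kernels older than 6.3 reject the unknown flag with EINVAL,
		// so real callers probe and fall back to plain MFD_CLOEXEC.
		fmt.Println("memfd_create:", err)
		return
	}
	defer unix.Close(fd)
	fmt.Println("sealed, non-executable memfd fd =", fd)
}
```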
Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.154 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.173 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.184 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.190 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.192 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.195 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.195 [INFO][4490] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.197 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5 Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.201 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.207 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.207 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" host="localhost" Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.207 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:56.235451 containerd[1431]: 2025-07-10 00:37:56.207 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" HandleID="k8s-pod-network.fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.236072 containerd[1431]: 2025-07-10 00:37:56.209 [INFO][4438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2d71dbf9-7620-445d-8d35-2cc9ef195ea7", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qzjj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali409f727fa17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:56.236072 containerd[1431]: 2025-07-10 00:37:56.209 [INFO][4438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.236072 containerd[1431]: 2025-07-10 00:37:56.209 [INFO][4438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali409f727fa17 ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.236072 containerd[1431]: 2025-07-10 00:37:56.215 [INFO][4438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.236072 
containerd[1431]: 2025-07-10 00:37:56.215 [INFO][4438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2d71dbf9-7620-445d-8d35-2cc9ef195ea7", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5", Pod:"coredns-7c65d6cfc9-qzjj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali409f727fa17", MAC:"c2:03:86:1c:77:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:56.236072 containerd[1431]: 2025-07-10 00:37:56.230 [INFO][4438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qzjj6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:37:56.265070 containerd[1431]: time="2025-07-10T00:37:56.264962999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:56.265070 containerd[1431]: time="2025-07-10T00:37:56.265022399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:56.265070 containerd[1431]: time="2025-07-10T00:37:56.265033319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:56.265293 containerd[1431]: time="2025-07-10T00:37:56.265110959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:56.291893 systemd[1]: Started cri-containerd-fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5.scope - libcontainer container fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5. Jul 10 00:37:56.309101 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:56.360111 containerd[1431]: time="2025-07-10T00:37:56.360054753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qzjj6,Uid:2d71dbf9-7620-445d-8d35-2cc9ef195ea7,Namespace:kube-system,Attempt:1,} returns sandbox id \"fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5\"" Jul 10 00:37:56.372130 kubelet[2453]: E0710 00:37:56.371725 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:56.375349 containerd[1431]: time="2025-07-10T00:37:56.375147611Z" level=info msg="CreateContainer within sandbox \"fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:37:56.419469 containerd[1431]: time="2025-07-10T00:37:56.419176784Z" level=info msg="CreateContainer within sandbox \"fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"092d080a24c045e9c0a4fa26ab7e94c0ba233da2ab889fcc0e79fd6874179669\"" Jul 10 00:37:56.422616 containerd[1431]: time="2025-07-10T00:37:56.422578548Z" level=info msg="StartContainer for \"092d080a24c045e9c0a4fa26ab7e94c0ba233da2ab889fcc0e79fd6874179669\"" Jul 10 00:37:56.459547 systemd[1]: Started cri-containerd-092d080a24c045e9c0a4fa26ab7e94c0ba233da2ab889fcc0e79fd6874179669.scope - libcontainer container 092d080a24c045e9c0a4fa26ab7e94c0ba233da2ab889fcc0e79fd6874179669. Jul 10 00:37:56.502313 containerd[1431]: time="2025-07-10T00:37:56.502260404Z" level=info msg="StartContainer for \"092d080a24c045e9c0a4fa26ab7e94c0ba233da2ab889fcc0e79fd6874179669\" returns successfully" Jul 10 00:37:57.012698 systemd-networkd[1370]: cali468623cb9d2: Gained IPv6LL Jul 10 00:37:57.022874 systemd[1]: Started sshd@7-10.0.0.112:22-10.0.0.1:59620.service - OpenSSH per-connection server daemon (10.0.0.1:59620). Jul 10 00:37:57.037995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897699953.mount: Deactivated successfully. Jul 10 00:37:57.072571 sshd[4650]: Accepted publickey for core from 10.0.0.1 port 59620 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:37:57.075053 sshd[4650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:37:57.085692 systemd-logind[1418]: New session 8 of user core. Jul 10 00:37:57.093503 systemd[1]: Started session-8.scope - Session 8 of User core. 
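One readability note on the coredns WorkloadEndpoint dumps above: the port numbers are printed as Go hex literals. Decoded, they are the familiar CoreDNS ports:

```go
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // dns and dns-tcp -> port 53
	fmt.Println(0x23c1) // metrics -> port 9153, CoreDNS's Prometheus endpoint
}
```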
Jul 10 00:37:57.107015 containerd[1431]: time="2025-07-10T00:37:57.106973483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:57.108597 containerd[1431]: time="2025-07-10T00:37:57.108555084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 00:37:57.110180 containerd[1431]: time="2025-07-10T00:37:57.110138686Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:57.114563 containerd[1431]: time="2025-07-10T00:37:57.114499971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:37:57.115392 containerd[1431]: time="2025-07-10T00:37:57.115348172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.779344942s" Jul 10 00:37:57.115505 containerd[1431]: time="2025-07-10T00:37:57.115396012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:37:57.117313 containerd[1431]: time="2025-07-10T00:37:57.117278134Z" level=info msg="CreateContainer within sandbox \"3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:37:57.133475 containerd[1431]: time="2025-07-10T00:37:57.133434952Z" level=info msg="CreateContainer within sandbox \"3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4710c95fbb423d70e89cf5a2cea0b781632b456ccf28b1c7b3cd8162b7a4945f\"" Jul 10 00:37:57.134541 containerd[1431]: time="2025-07-10T00:37:57.134510234Z" level=info msg="StartContainer for \"4710c95fbb423d70e89cf5a2cea0b781632b456ccf28b1c7b3cd8162b7a4945f\"" Jul 10 00:37:57.165572 systemd[1]: Started cri-containerd-4710c95fbb423d70e89cf5a2cea0b781632b456ccf28b1c7b3cd8162b7a4945f.scope - libcontainer container 4710c95fbb423d70e89cf5a2cea0b781632b456ccf28b1c7b3cd8162b7a4945f. 
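For scale, the apiserver image pull above moved 44517149 compressed bytes in 1.779344942s. A quick throughput check, using only the two numbers taken from the log entries (nothing here is measured independently):

```go
package main

import "fmt"

func main() {
	bytesRead := 44517149.0 // "active requests=0, bytes read=44517149"
	seconds := 1.779344942  // "in 1.779344942s"
	fmt.Printf("%.1f MiB/s\n", bytesRead/seconds/(1024*1024)) // about 24 MiB/s
}
```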
Jul 10 00:37:57.172725 kubelet[2453]: E0710 00:37:57.172641 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:57.195517 kubelet[2453]: I0710 00:37:57.195244 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qzjj6" podStartSLOduration=35.195125182 podStartE2EDuration="35.195125182s" podCreationTimestamp="2025-07-10 00:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:57.181561607 +0000 UTC m=+40.366665635" watchObservedRunningTime="2025-07-10 00:37:57.195125182 +0000 UTC m=+40.380229210" Jul 10 00:37:57.234412 containerd[1431]: time="2025-07-10T00:37:57.229329540Z" level=info msg="StartContainer for \"4710c95fbb423d70e89cf5a2cea0b781632b456ccf28b1c7b3cd8162b7a4945f\" returns successfully" Jul 10 00:37:57.379621 sshd[4650]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:57.383424 systemd[1]: sshd@7-10.0.0.112:22-10.0.0.1:59620.service: Deactivated successfully. Jul 10 00:37:57.385083 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:37:57.385776 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:37:57.386686 systemd-logind[1418]: Removed session 8. Jul 10 00:37:57.780722 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Jul 10 00:37:57.908478 containerd[1431]: time="2025-07-10T00:37:57.908423425Z" level=info msg="StopPodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\"" Jul 10 00:37:57.909376 containerd[1431]: time="2025-07-10T00:37:57.909294906Z" level=info msg="StopPodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\"" Jul 10 00:37:57.909376 containerd[1431]: time="2025-07-10T00:37:57.909327346Z" level=info msg="StopPodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\"" Jul 10 00:37:57.909505 containerd[1431]: time="2025-07-10T00:37:57.909339866Z" level=info msg="StopPodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\"" Jul 10 00:37:57.972840 systemd-networkd[1370]: cali409f727fa17: Gained IPv6LL Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:57.985 [INFO][4770] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:57.986 [INFO][4770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" iface="eth0" netns="/var/run/netns/cni-2eb819a3-0ce4-de75-a9e3-471779756af9" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:57.987 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" iface="eth0" netns="/var/run/netns/cni-2eb819a3-0ce4-de75-a9e3-471779756af9" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:57.987 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" iface="eth0" netns="/var/run/netns/cni-2eb819a3-0ce4-de75-a9e3-471779756af9" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:57.988 [INFO][4770] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:57.988 [INFO][4770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.030 [INFO][4796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.030 [INFO][4796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.030 [INFO][4796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.042 [WARNING][4796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.042 [INFO][4796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.044 [INFO][4796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:58.055662 containerd[1431]: 2025-07-10 00:37:58.048 [INFO][4770] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:37:58.056372 containerd[1431]: time="2025-07-10T00:37:58.056202028Z" level=info msg="TearDown network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" successfully" Jul 10 00:37:58.056372 containerd[1431]: time="2025-07-10T00:37:58.056237988Z" level=info msg="StopPodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" returns successfully" Jul 10 00:37:58.057289 kubelet[2453]: E0710 00:37:58.056590 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:58.058417 containerd[1431]: time="2025-07-10T00:37:58.058031190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rsqvw,Uid:7c3c9fa5-4474-4544-97e7-30e66ba1f67c,Namespace:kube-system,Attempt:1,}" Jul 10 00:37:58.059569 systemd[1]: run-netns-cni\x2d2eb819a3\x2d0ce4\x2dde75\x2da9e3\x2d471779756af9.mount: Deactivated successfully. 
Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:57.990 [INFO][4757] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:57.990 [INFO][4757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" iface="eth0" netns="/var/run/netns/cni-9c93896e-e27a-b469-9d25-ded374bad513" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:57.992 [INFO][4757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" iface="eth0" netns="/var/run/netns/cni-9c93896e-e27a-b469-9d25-ded374bad513" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:57.992 [INFO][4757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" iface="eth0" netns="/var/run/netns/cni-9c93896e-e27a-b469-9d25-ded374bad513" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:57.993 [INFO][4757] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:57.993 [INFO][4757] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.034 [INFO][4804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.034 [INFO][4804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.044 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.056 [WARNING][4804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.056 [INFO][4804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.062 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:58.071892 containerd[1431]: 2025-07-10 00:37:58.066 [INFO][4757] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:37:58.072830 containerd[1431]: time="2025-07-10T00:37:58.072799405Z" level=info msg="TearDown network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" successfully" Jul 10 00:37:58.073167 containerd[1431]: time="2025-07-10T00:37:58.073142846Z" level=info msg="StopPodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" returns successfully" Jul 10 00:37:58.074145 containerd[1431]: time="2025-07-10T00:37:58.074105767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-sqpvz,Uid:b35917b6-dff4-49b2-b380-4a0514f6d1e8,Namespace:calico-system,Attempt:1,}" Jul 10 00:37:58.075224 systemd[1]: run-netns-cni\x2d9c93896e\x2de27a\x2db469\x2d9d25\x2dded374bad513.mount: Deactivated successfully. Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:57.993 [INFO][4756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:57.993 [INFO][4756] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" iface="eth0" netns="/var/run/netns/cni-7d53dc57-8e6b-4be1-31ff-cd084348df9f" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:57.994 [INFO][4756] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" iface="eth0" netns="/var/run/netns/cni-7d53dc57-8e6b-4be1-31ff-cd084348df9f" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:57.994 [INFO][4756] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" iface="eth0" netns="/var/run/netns/cni-7d53dc57-8e6b-4be1-31ff-cd084348df9f" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:57.995 [INFO][4756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:57.995 [INFO][4756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.044 [INFO][4803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.045 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.062 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.080 [WARNING][4803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.080 [INFO][4803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.082 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:58.095075 containerd[1431]: 2025-07-10 00:37:58.090 [INFO][4756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:37:58.096456 containerd[1431]: time="2025-07-10T00:37:58.096422870Z" level=info msg="TearDown network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" successfully" Jul 10 00:37:58.096510 containerd[1431]: time="2025-07-10T00:37:58.096463030Z" level=info msg="StopPodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" returns successfully" Jul 10 00:37:58.098933 containerd[1431]: time="2025-07-10T00:37:58.098411952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdd86887-d59d2,Uid:50dbd3c7-db9e-475c-b96d-679203b54cc6,Namespace:calico-system,Attempt:1,}" Jul 10 00:37:58.105151 systemd[1]: run-netns-cni\x2d7d53dc57\x2d8e6b\x2d4be1\x2d31ff\x2dcd084348df9f.mount: Deactivated successfully. Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.029 [INFO][4769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.029 [INFO][4769] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" iface="eth0" netns="/var/run/netns/cni-2e6dd462-a197-d571-61db-9b8e897cc3dd" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.030 [INFO][4769] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" iface="eth0" netns="/var/run/netns/cni-2e6dd462-a197-d571-61db-9b8e897cc3dd" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.031 [INFO][4769] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" iface="eth0" netns="/var/run/netns/cni-2e6dd462-a197-d571-61db-9b8e897cc3dd" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.031 [INFO][4769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.031 [INFO][4769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.067 [INFO][4820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.067 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.082 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.098 [WARNING][4820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.098 [INFO][4820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.110 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:58.134153 containerd[1431]: 2025-07-10 00:37:58.125 [INFO][4769] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Jul 10 00:37:58.134893 containerd[1431]: time="2025-07-10T00:37:58.134208670Z" level=info msg="TearDown network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" successfully" Jul 10 00:37:58.134893 containerd[1431]: time="2025-07-10T00:37:58.134236390Z" level=info msg="StopPodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" returns successfully" Jul 10 00:37:58.135151 containerd[1431]: time="2025-07-10T00:37:58.135120271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qd5xx,Uid:3b36436a-7d97-4120-92b3-49bbe1e5480c,Namespace:calico-system,Attempt:1,}" Jul 10 00:37:58.177962 kubelet[2453]: E0710 00:37:58.177724 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:58.195827 kubelet[2453]: I0710 00:37:58.195456 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b5f696fd-429ml" podStartSLOduration=25.414580551 podStartE2EDuration="27.195439815s" podCreationTimestamp="2025-07-10 00:37:31 +0000 UTC" firstStartedPulling="2025-07-10 00:37:55.335409909 +0000 UTC m=+38.520513937" lastFinishedPulling="2025-07-10 00:37:57.116269173 +0000 UTC m=+40.301373201" observedRunningTime="2025-07-10 00:37:58.19115449 +0000 UTC m=+41.376258518" watchObservedRunningTime="2025-07-10 00:37:58.195439815 +0000 UTC m=+41.380543843" Jul 10 00:37:58.274847 systemd-networkd[1370]: cali8f8589d2044: Link UP Jul 10 00:37:58.276954 systemd-networkd[1370]: cali8f8589d2044: Gained carrier Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.140 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0 coredns-7c65d6cfc9- kube-system 7c3c9fa5-4474-4544-97e7-30e66ba1f67c 990 0 2025-07-10 00:37:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-rsqvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f8589d2044 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.140 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.201 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" HandleID="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.202 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" 
HandleID="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-rsqvw", "timestamp":"2025-07-10 00:37:58.201259901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.202 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.202 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.202 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.216 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.226 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.231 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.234 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.236 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.236 [INFO][4871] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.240 [INFO][4871] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661 Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.250 [INFO][4871] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.260 [INFO][4871] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.260 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" host="localhost" Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.260 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:58.287938 containerd[1431]: 2025-07-10 00:37:58.260 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" HandleID="k8s-pod-network.a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.288526 containerd[1431]: 2025-07-10 00:37:58.266 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c3c9fa5-4474-4544-97e7-30e66ba1f67c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-rsqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f8589d2044", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.288526 containerd[1431]: 2025-07-10 00:37:58.266 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.288526 containerd[1431]: 2025-07-10 00:37:58.266 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f8589d2044 ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.288526 containerd[1431]: 2025-07-10 00:37:58.275 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.288526 
containerd[1431]: 2025-07-10 00:37:58.276 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c3c9fa5-4474-4544-97e7-30e66ba1f67c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661", Pod:"coredns-7c65d6cfc9-rsqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f8589d2044", MAC:"fa:61:07:85:fc:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.288526 containerd[1431]: 2025-07-10 00:37:58.285 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rsqvw" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:37:58.341519 containerd[1431]: time="2025-07-10T00:37:58.341311689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:58.341519 containerd[1431]: time="2025-07-10T00:37:58.341404089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:58.341519 containerd[1431]: time="2025-07-10T00:37:58.341425169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.342768 containerd[1431]: time="2025-07-10T00:37:58.341519329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.368535 systemd[1]: Started cri-containerd-a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661.scope - libcontainer container a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661. Jul 10 00:37:58.380500 systemd-networkd[1370]: calia220035d9ba: Link UP Jul 10 00:37:58.382141 systemd-networkd[1370]: calia220035d9ba: Gained carrier Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.164 [INFO][4840] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0 goldmane-58fd7646b9- calico-system b35917b6-dff4-49b2-b380-4a0514f6d1e8 991 0 2025-07-10 00:37:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-sqpvz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia220035d9ba [] [] }} ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.165 [INFO][4840] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.216 [INFO][4894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" HandleID="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.216 [INFO][4894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" HandleID="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-sqpvz", "timestamp":"2025-07-10 00:37:58.216153157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.216 [INFO][4894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.260 [INFO][4894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.260 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.324 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.339 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.348 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.351 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.354 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.354 [INFO][4894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.356 [INFO][4894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.364 [INFO][4894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.371 [INFO][4894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.371 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" host="localhost" Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.372 [INFO][4894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:58.399157 containerd[1431]: 2025-07-10 00:37:58.372 [INFO][4894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" HandleID="k8s-pod-network.08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.399729 containerd[1431]: 2025-07-10 00:37:58.377 [INFO][4840] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b35917b6-dff4-49b2-b380-4a0514f6d1e8", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-sqpvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia220035d9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.399729 containerd[1431]: 2025-07-10 00:37:58.377 [INFO][4840] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.399729 containerd[1431]: 2025-07-10 00:37:58.377 [INFO][4840] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia220035d9ba ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.399729 containerd[1431]: 2025-07-10 00:37:58.381 [INFO][4840] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.399729 containerd[1431]: 2025-07-10 00:37:58.384 [INFO][4840] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b35917b6-dff4-49b2-b380-4a0514f6d1e8", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b", Pod:"goldmane-58fd7646b9-sqpvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia220035d9ba", MAC:"a6:62:e6:8b:cd:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.399729 containerd[1431]: 2025-07-10 00:37:58.395 [INFO][4840] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b" Namespace="calico-system" Pod="goldmane-58fd7646b9-sqpvz" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:37:58.400441 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:58.433696 containerd[1431]: time="2025-07-10T00:37:58.432538225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:58.433696 containerd[1431]: time="2025-07-10T00:37:58.432600745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:58.433696 containerd[1431]: time="2025-07-10T00:37:58.432615665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.433696 containerd[1431]: time="2025-07-10T00:37:58.432695745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.457815 systemd[1]: Started cri-containerd-08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b.scope - libcontainer container 08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b. 
Jul 10 00:37:58.463435 containerd[1431]: time="2025-07-10T00:37:58.463311778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rsqvw,Uid:7c3c9fa5-4474-4544-97e7-30e66ba1f67c,Namespace:kube-system,Attempt:1,} returns sandbox id \"a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661\"" Jul 10 00:37:58.465002 kubelet[2453]: E0710 00:37:58.464977 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:58.468781 containerd[1431]: time="2025-07-10T00:37:58.468734263Z" level=info msg="CreateContainer within sandbox \"a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:37:58.480316 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:58.483708 containerd[1431]: time="2025-07-10T00:37:58.483167438Z" level=info msg="CreateContainer within sandbox \"a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fb424cf18eafe1db14cfb605aa310cbe5352791c5271d9dfb9e2a57b290889f\"" Jul 10 00:37:58.483901 systemd-networkd[1370]: cali3e4f83a4738: Link UP Jul 10 00:37:58.484091 systemd-networkd[1370]: cali3e4f83a4738: Gained carrier Jul 10 00:37:58.485682 containerd[1431]: time="2025-07-10T00:37:58.485639201Z" level=info msg="StartContainer for \"2fb424cf18eafe1db14cfb605aa310cbe5352791c5271d9dfb9e2a57b290889f\"" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.189 [INFO][4855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0 calico-kube-controllers-55cdd86887- calico-system 50dbd3c7-db9e-475c-b96d-679203b54cc6 992 0 2025-07-10 00:37:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55cdd86887 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55cdd86887-d59d2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3e4f83a4738 [] [] }} ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.189 [INFO][4855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.241 [INFO][4905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" HandleID="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.242 [INFO][4905] ipam/ipam_plugin.go 265: Auto assigning 
IP ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" HandleID="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001b0e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55cdd86887-d59d2", "timestamp":"2025-07-10 00:37:58.241956504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.243 [INFO][4905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.372 [INFO][4905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.372 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.414 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.437 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.453 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.455 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.459 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.459 [INFO][4905] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.461 [INFO][4905] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360 Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.469 [INFO][4905] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.477 [INFO][4905] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.477 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" host="localhost" Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.477 [INFO][4905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:58.509675 containerd[1431]: 2025-07-10 00:37:58.477 [INFO][4905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" HandleID="k8s-pod-network.ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.511801 containerd[1431]: 2025-07-10 00:37:58.481 [INFO][4855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0", GenerateName:"calico-kube-controllers-55cdd86887-", Namespace:"calico-system", SelfLink:"", UID:"50dbd3c7-db9e-475c-b96d-679203b54cc6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdd86887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55cdd86887-d59d2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e4f83a4738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.511801 containerd[1431]: 2025-07-10 00:37:58.481 [INFO][4855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.511801 containerd[1431]: 2025-07-10 00:37:58.481 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e4f83a4738 ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.511801 containerd[1431]: 2025-07-10 00:37:58.485 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.511801 containerd[1431]: 2025-07-10 00:37:58.490 [INFO][4855] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0", GenerateName:"calico-kube-controllers-55cdd86887-", Namespace:"calico-system", SelfLink:"", UID:"50dbd3c7-db9e-475c-b96d-679203b54cc6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdd86887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360", Pod:"calico-kube-controllers-55cdd86887-d59d2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e4f83a4738", MAC:"7a:f5:68:b9:76:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.511801 containerd[1431]: 2025-07-10 00:37:58.503 [INFO][4855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360" Namespace="calico-system" Pod="calico-kube-controllers-55cdd86887-d59d2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:37:58.532542 systemd[1]: Started cri-containerd-2fb424cf18eafe1db14cfb605aa310cbe5352791c5271d9dfb9e2a57b290889f.scope - libcontainer container 2fb424cf18eafe1db14cfb605aa310cbe5352791c5271d9dfb9e2a57b290889f. Jul 10 00:37:58.538137 containerd[1431]: time="2025-07-10T00:37:58.538028736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:58.538289 containerd[1431]: time="2025-07-10T00:37:58.538118817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:58.538289 containerd[1431]: time="2025-07-10T00:37:58.538137297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.538289 containerd[1431]: time="2025-07-10T00:37:58.538235897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.539033 containerd[1431]: time="2025-07-10T00:37:58.538998817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-sqpvz,Uid:b35917b6-dff4-49b2-b380-4a0514f6d1e8,Namespace:calico-system,Attempt:1,} returns sandbox id \"08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b\"" Jul 10 00:37:58.541682 containerd[1431]: time="2025-07-10T00:37:58.541632100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:37:58.560516 systemd[1]: Started cri-containerd-ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360.scope - libcontainer container ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360. Jul 10 00:37:58.577843 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:58.583521 systemd-networkd[1370]: califeab87cf85f: Link UP Jul 10 00:37:58.583993 systemd-networkd[1370]: califeab87cf85f: Gained carrier Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.227 [INFO][4876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qd5xx-eth0 csi-node-driver- calico-system 3b36436a-7d97-4120-92b3-49bbe1e5480c 993 0 2025-07-10 00:37:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qd5xx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califeab87cf85f [] [] }} ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.228 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.276 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" HandleID="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.277 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" HandleID="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005aab00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qd5xx", "timestamp":"2025-07-10 00:37:58.276813981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:58.607704 
containerd[1431]: 2025-07-10 00:37:58.277 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.477 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.477 [INFO][4914] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.520 [INFO][4914] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.538 [INFO][4914] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.549 [INFO][4914] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.552 [INFO][4914] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.554 [INFO][4914] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.554 [INFO][4914] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.556 [INFO][4914] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5 Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.560 [INFO][4914] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.570 [INFO][4914] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.570 [INFO][4914] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" host="localhost" Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.570 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:58.607704 containerd[1431]: 2025-07-10 00:37:58.570 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" HandleID="k8s-pod-network.612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.608224 containerd[1431]: 2025-07-10 00:37:58.578 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qd5xx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b36436a-7d97-4120-92b3-49bbe1e5480c", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qd5xx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeab87cf85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.608224 containerd[1431]: 2025-07-10 00:37:58.578 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.608224 containerd[1431]: 2025-07-10 00:37:58.578 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califeab87cf85f ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.608224 containerd[1431]: 2025-07-10 00:37:58.583 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.608224 containerd[1431]: 2025-07-10 00:37:58.584 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qd5xx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b36436a-7d97-4120-92b3-49bbe1e5480c", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5", Pod:"csi-node-driver-qd5xx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeab87cf85f", MAC:"16:ab:23:c8:a3:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:58.608224 containerd[1431]: 2025-07-10 00:37:58.598 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5" Namespace="calico-system" Pod="csi-node-driver-qd5xx" WorkloadEndpoint="localhost-k8s-csi--node--driver--qd5xx-eth0" Jul 10 00:37:58.623161 containerd[1431]: time="2025-07-10T00:37:58.623113466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55cdd86887-d59d2,Uid:50dbd3c7-db9e-475c-b96d-679203b54cc6,Namespace:calico-system,Attempt:1,} returns sandbox id \"ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360\"" Jul 10 00:37:58.631994 containerd[1431]: time="2025-07-10T00:37:58.631633155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:58.631994 containerd[1431]: time="2025-07-10T00:37:58.631695235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:58.631994 containerd[1431]: time="2025-07-10T00:37:58.631710275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.631994 containerd[1431]: time="2025-07-10T00:37:58.631783555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:58.639444 containerd[1431]: time="2025-07-10T00:37:58.638826403Z" level=info msg="StartContainer for \"2fb424cf18eafe1db14cfb605aa310cbe5352791c5271d9dfb9e2a57b290889f\" returns successfully" Jul 10 00:37:58.655538 systemd[1]: Started cri-containerd-612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5.scope - libcontainer container 612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5. Jul 10 00:37:58.671850 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:58.683244 containerd[1431]: time="2025-07-10T00:37:58.683097090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qd5xx,Uid:3b36436a-7d97-4120-92b3-49bbe1e5480c,Namespace:calico-system,Attempt:1,} returns sandbox id \"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5\"" Jul 10 00:37:58.909567 containerd[1431]: time="2025-07-10T00:37:58.909338688Z" level=info msg="StopPodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\"" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:58.971 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:58.974 [INFO][5173] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" iface="eth0" netns="/var/run/netns/cni-99129ee1-d889-18eb-f182-6b1ed004e719" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:58.974 [INFO][5173] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" iface="eth0" netns="/var/run/netns/cni-99129ee1-d889-18eb-f182-6b1ed004e719" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:58.974 [INFO][5173] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" iface="eth0" netns="/var/run/netns/cni-99129ee1-d889-18eb-f182-6b1ed004e719" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:58.975 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:58.975 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.007 [INFO][5181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.007 [INFO][5181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.007 [INFO][5181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.018 [WARNING][5181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.018 [INFO][5181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.020 [INFO][5181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:37:59.026632 containerd[1431]: 2025-07-10 00:37:59.023 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:37:59.027530 containerd[1431]: time="2025-07-10T00:37:59.027495051Z" level=info msg="TearDown network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" successfully" Jul 10 00:37:59.027570 containerd[1431]: time="2025-07-10T00:37:59.027531212Z" level=info msg="StopPodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" returns successfully" Jul 10 00:37:59.028203 containerd[1431]: time="2025-07-10T00:37:59.028167492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-6xg9b,Uid:e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:37:59.067556 systemd[1]: run-netns-cni\x2d99129ee1\x2dd889\x2d18eb\x2df182\x2d6b1ed004e719.mount: Deactivated successfully. Jul 10 00:37:59.067647 systemd[1]: run-netns-cni\x2d2e6dd462\x2da197\x2dd571\x2d61db\x2d9b8e897cc3dd.mount: Deactivated successfully. 
Jul 10 00:37:59.165524 systemd-networkd[1370]: calib33295a3608: Link UP Jul 10 00:37:59.166345 systemd-networkd[1370]: calib33295a3608: Gained carrier Jul 10 00:37:59.182759 kubelet[2453]: E0710 00:37:59.182654 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:59.187969 kubelet[2453]: I0710 00:37:59.187948 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.086 [INFO][5191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0 calico-apiserver-9b5f696fd- calico-apiserver e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85 1026 0 2025-07-10 00:37:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b5f696fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9b5f696fd-6xg9b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib33295a3608 [] [] }} ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.086 [INFO][5191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.116 [INFO][5204] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" HandleID="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.117 [INFO][5204] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" HandleID="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001377a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9b5f696fd-6xg9b", "timestamp":"2025-07-10 00:37:59.11695626 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.117 [INFO][5204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.117 [INFO][5204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.117 [INFO][5204] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.126 [INFO][5204] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.131 [INFO][5204] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.137 [INFO][5204] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.139 [INFO][5204] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.142 [INFO][5204] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.142 [INFO][5204] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.144 [INFO][5204] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0 Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.149 [INFO][5204] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.157 [INFO][5204] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.157 [INFO][5204] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" host="localhost" Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.158 [INFO][5204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:37:59.203282 containerd[1431]: 2025-07-10 00:37:59.158 [INFO][5204] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" HandleID="k8s-pod-network.2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.204602 containerd[1431]: 2025-07-10 00:37:59.162 [INFO][5191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9b5f696fd-6xg9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib33295a3608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:59.204602 containerd[1431]: 2025-07-10 00:37:59.162 [INFO][5191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.204602 containerd[1431]: 2025-07-10 00:37:59.163 [INFO][5191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib33295a3608 ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.204602 containerd[1431]: 2025-07-10 00:37:59.166 [INFO][5191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.204602 containerd[1431]: 2025-07-10 00:37:59.167 [INFO][5191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0", Pod:"calico-apiserver-9b5f696fd-6xg9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib33295a3608", MAC:"6e:08:64:0d:8b:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:37:59.204602 containerd[1431]: 2025-07-10 00:37:59.199 [INFO][5191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0" Namespace="calico-apiserver" Pod="calico-apiserver-9b5f696fd-6xg9b" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:37:59.209193 kubelet[2453]: I0710 00:37:59.209131 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rsqvw" podStartSLOduration=37.208445351 podStartE2EDuration="37.208445351s" podCreationTimestamp="2025-07-10 00:37:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:37:59.20829003 +0000 UTC m=+42.393394058" watchObservedRunningTime="2025-07-10 00:37:59.208445351 +0000 UTC m=+42.393549339" Jul 10 00:37:59.240595 containerd[1431]: time="2025-07-10T00:37:59.240474102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:59.240595 containerd[1431]: time="2025-07-10T00:37:59.240534502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:59.240595 containerd[1431]: time="2025-07-10T00:37:59.240552662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:59.240775 containerd[1431]: time="2025-07-10T00:37:59.240672502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:59.262520 systemd[1]: Started cri-containerd-2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0.scope - libcontainer container 2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0. Jul 10 00:37:59.276494 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:59.293929 containerd[1431]: time="2025-07-10T00:37:59.293883875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b5f696fd-6xg9b,Uid:e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0\"" Jul 10 00:37:59.297862 containerd[1431]: time="2025-07-10T00:37:59.297816839Z" level=info msg="CreateContainer within sandbox \"2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:37:59.309696 containerd[1431]: time="2025-07-10T00:37:59.309651651Z" level=info msg="CreateContainer within sandbox \"2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6dff4999b0e3ad916a10f0ab3e9a0995d2647d88451de1b2b9059e758b68b2c5\"" Jul 10 00:37:59.310381 containerd[1431]: time="2025-07-10T00:37:59.310229291Z" level=info msg="StartContainer for \"6dff4999b0e3ad916a10f0ab3e9a0995d2647d88451de1b2b9059e758b68b2c5\"" Jul 10 00:37:59.352547 systemd[1]: Started cri-containerd-6dff4999b0e3ad916a10f0ab3e9a0995d2647d88451de1b2b9059e758b68b2c5.scope - libcontainer container 6dff4999b0e3ad916a10f0ab3e9a0995d2647d88451de1b2b9059e758b68b2c5. Jul 10 00:37:59.380529 systemd-networkd[1370]: cali8f8589d2044: Gained IPv6LL Jul 10 00:37:59.401395 containerd[1431]: time="2025-07-10T00:37:59.401330942Z" level=info msg="StartContainer for \"6dff4999b0e3ad916a10f0ab3e9a0995d2647d88451de1b2b9059e758b68b2c5\" returns successfully" Jul 10 00:37:59.636500 systemd-networkd[1370]: calia220035d9ba: Gained IPv6LL Jul 10 00:37:59.700497 systemd-networkd[1370]: cali3e4f83a4738: Gained IPv6LL Jul 10 00:37:59.892473 systemd-networkd[1370]: califeab87cf85f: Gained IPv6LL Jul 10 00:38:00.191958 kubelet[2453]: E0710 00:38:00.191847 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:00.329822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205035264.mount: Deactivated successfully. 
Jul 10 00:38:00.596573 systemd-networkd[1370]: calib33295a3608: Gained IPv6LL Jul 10 00:38:00.819373 containerd[1431]: time="2025-07-10T00:38:00.819309934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:00.819910 containerd[1431]: time="2025-07-10T00:38:00.819860175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 10 00:38:00.820841 containerd[1431]: time="2025-07-10T00:38:00.820808376Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:00.823873 containerd[1431]: time="2025-07-10T00:38:00.823830499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:00.824754 containerd[1431]: time="2025-07-10T00:38:00.824714860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.283046679s" Jul 10 00:38:00.824803 containerd[1431]: time="2025-07-10T00:38:00.824754940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 00:38:00.826556 containerd[1431]: time="2025-07-10T00:38:00.826516901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:38:00.829140 containerd[1431]: time="2025-07-10T00:38:00.829103824Z" level=info msg="CreateContainer within sandbox \"08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:38:00.847436 containerd[1431]: time="2025-07-10T00:38:00.847289520Z" level=info msg="CreateContainer within sandbox \"08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"223d7465fbd372cf6b6d806aa0eda7791352ce5d4c417c9f153573f0c87fc7be\"" Jul 10 00:38:00.848093 containerd[1431]: time="2025-07-10T00:38:00.847940921Z" level=info msg="StartContainer for \"223d7465fbd372cf6b6d806aa0eda7791352ce5d4c417c9f153573f0c87fc7be\"" Jul 10 00:38:00.878578 systemd[1]: Started cri-containerd-223d7465fbd372cf6b6d806aa0eda7791352ce5d4c417c9f153573f0c87fc7be.scope - libcontainer container 223d7465fbd372cf6b6d806aa0eda7791352ce5d4c417c9f153573f0c87fc7be. 
Jul 10 00:38:00.927664 containerd[1431]: time="2025-07-10T00:38:00.927535915Z" level=info msg="StartContainer for \"223d7465fbd372cf6b6d806aa0eda7791352ce5d4c417c9f153573f0c87fc7be\" returns successfully" Jul 10 00:38:01.195708 kubelet[2453]: I0710 00:38:01.195541 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:38:01.197447 kubelet[2453]: E0710 00:38:01.195915 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:01.209131 kubelet[2453]: I0710 00:38:01.209062 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b5f696fd-6xg9b" podStartSLOduration=30.209041884 podStartE2EDuration="30.209041884s" podCreationTimestamp="2025-07-10 00:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:38:00.204861884 +0000 UTC m=+43.389965912" watchObservedRunningTime="2025-07-10 00:38:01.209041884 +0000 UTC m=+44.394145912" Jul 10 00:38:01.209597 kubelet[2453]: I0710 00:38:01.209355 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-sqpvz" podStartSLOduration=22.924184483 podStartE2EDuration="25.209349764s" podCreationTimestamp="2025-07-10 00:37:36 +0000 UTC" firstStartedPulling="2025-07-10 00:37:58.540516179 +0000 UTC m=+41.725620207" lastFinishedPulling="2025-07-10 00:38:00.82568146 +0000 UTC m=+44.010785488" observedRunningTime="2025-07-10 00:38:01.208649964 +0000 UTC m=+44.393754032" watchObservedRunningTime="2025-07-10 00:38:01.209349764 +0000 UTC m=+44.394453752" Jul 10 00:38:02.197053 kubelet[2453]: I0710 00:38:02.197008 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:38:02.395276 systemd[1]: Started sshd@8-10.0.0.112:22-10.0.0.1:59624.service - OpenSSH per-connection server daemon (10.0.0.1:59624). 
Jul 10 00:38:02.471277 containerd[1431]: time="2025-07-10T00:38:02.470990156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:02.472189 containerd[1431]: time="2025-07-10T00:38:02.472142317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 00:38:02.472274 containerd[1431]: time="2025-07-10T00:38:02.472214197Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:02.476418 containerd[1431]: time="2025-07-10T00:38:02.475467200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:02.476418 containerd[1431]: time="2025-07-10T00:38:02.476223721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.64967398s" Jul 10 00:38:02.476418 containerd[1431]: time="2025-07-10T00:38:02.476263321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 00:38:02.479783 containerd[1431]: time="2025-07-10T00:38:02.479739844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:38:02.494962 containerd[1431]: time="2025-07-10T00:38:02.494887336Z" level=info msg="CreateContainer within sandbox \"ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:38:02.514834 containerd[1431]: time="2025-07-10T00:38:02.514774352Z" level=info msg="CreateContainer within sandbox \"ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"36ea03b96cba762424a50878f3712fc29041e51c7d5e91aa69034e3a88a6393b\"" Jul 10 00:38:02.516753 containerd[1431]: time="2025-07-10T00:38:02.516719234Z" level=info msg="StartContainer for \"36ea03b96cba762424a50878f3712fc29041e51c7d5e91aa69034e3a88a6393b\"" Jul 10 00:38:02.518432 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 59624 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:02.521498 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:02.529506 systemd-logind[1418]: New session 9 of user core. Jul 10 00:38:02.535577 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:38:02.559789 systemd[1]: Started cri-containerd-36ea03b96cba762424a50878f3712fc29041e51c7d5e91aa69034e3a88a6393b.scope - libcontainer container 36ea03b96cba762424a50878f3712fc29041e51c7d5e91aa69034e3a88a6393b. 
Jul 10 00:38:02.616685 containerd[1431]: time="2025-07-10T00:38:02.616631075Z" level=info msg="StartContainer for \"36ea03b96cba762424a50878f3712fc29041e51c7d5e91aa69034e3a88a6393b\" returns successfully" Jul 10 00:38:03.012422 sshd[5372]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:03.015281 systemd[1]: sshd@8-10.0.0.112:22-10.0.0.1:59624.service: Deactivated successfully. Jul 10 00:38:03.018209 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:38:03.019788 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:38:03.020995 systemd-logind[1418]: Removed session 9. Jul 10 00:38:03.549179 containerd[1431]: time="2025-07-10T00:38:03.549120528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:03.549812 containerd[1431]: time="2025-07-10T00:38:03.549772128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 00:38:03.550593 containerd[1431]: time="2025-07-10T00:38:03.550561769Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:03.552935 containerd[1431]: time="2025-07-10T00:38:03.552889131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:03.554233 containerd[1431]: time="2025-07-10T00:38:03.554198492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.074413568s" Jul 10 00:38:03.554298 containerd[1431]: time="2025-07-10T00:38:03.554241852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 00:38:03.558211 containerd[1431]: time="2025-07-10T00:38:03.558155455Z" level=info msg="CreateContainer within sandbox \"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:38:03.571984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3131650896.mount: Deactivated successfully. Jul 10 00:38:03.573596 containerd[1431]: time="2025-07-10T00:38:03.573461786Z" level=info msg="CreateContainer within sandbox \"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"50c4dbe607185e5e124efd322880e412784fcbca0d34af7f7ee2f1bc7a407534\"" Jul 10 00:38:03.574238 containerd[1431]: time="2025-07-10T00:38:03.574208907Z" level=info msg="StartContainer for \"50c4dbe607185e5e124efd322880e412784fcbca0d34af7f7ee2f1bc7a407534\"" Jul 10 00:38:03.613564 systemd[1]: Started cri-containerd-50c4dbe607185e5e124efd322880e412784fcbca0d34af7f7ee2f1bc7a407534.scope - libcontainer container 50c4dbe607185e5e124efd322880e412784fcbca0d34af7f7ee2f1bc7a407534. 
Jul 10 00:38:03.639969 containerd[1431]: time="2025-07-10T00:38:03.639907237Z" level=info msg="StartContainer for \"50c4dbe607185e5e124efd322880e412784fcbca0d34af7f7ee2f1bc7a407534\" returns successfully" Jul 10 00:38:03.641232 containerd[1431]: time="2025-07-10T00:38:03.641180198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:38:04.210467 kubelet[2453]: I0710 00:38:04.209981 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:38:04.586852 containerd[1431]: time="2025-07-10T00:38:04.586796333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:04.589101 containerd[1431]: time="2025-07-10T00:38:04.587596294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 00:38:04.617102 containerd[1431]: time="2025-07-10T00:38:04.617044555Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:04.619967 containerd[1431]: time="2025-07-10T00:38:04.619917117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:38:04.625743 containerd[1431]: time="2025-07-10T00:38:04.625480281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 984.235483ms" Jul 10 00:38:04.625743 containerd[1431]: time="2025-07-10T00:38:04.625537201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 00:38:04.629705 containerd[1431]: time="2025-07-10T00:38:04.629662644Z" level=info msg="CreateContainer within sandbox \"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:38:04.634244 kubelet[2453]: I0710 00:38:04.633773 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55cdd86887-d59d2" podStartSLOduration=23.780808433 podStartE2EDuration="27.633667127s" podCreationTimestamp="2025-07-10 00:37:37 +0000 UTC" firstStartedPulling="2025-07-10 00:37:58.626020229 +0000 UTC m=+41.811124217" lastFinishedPulling="2025-07-10 00:38:02.478878883 +0000 UTC m=+45.663982911" observedRunningTime="2025-07-10 00:38:03.217079314 +0000 UTC m=+46.402183342" watchObservedRunningTime="2025-07-10 00:38:04.633667127 +0000 UTC m=+47.818771155" Jul 10 00:38:04.651497 systemd[1]: run-containerd-runc-k8s.io-36ea03b96cba762424a50878f3712fc29041e51c7d5e91aa69034e3a88a6393b-runc.MpTX4S.mount: Deactivated successfully. 
Jul 10 00:38:04.655199 containerd[1431]: time="2025-07-10T00:38:04.652680021Z" level=info msg="CreateContainer within sandbox \"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d70f495607b14973654a5eecb1a738f9865c761d4d9e5bb9a4a6d740c20c99c0\"" Jul 10 00:38:04.655199 containerd[1431]: time="2025-07-10T00:38:04.653338941Z" level=info msg="StartContainer for \"d70f495607b14973654a5eecb1a738f9865c761d4d9e5bb9a4a6d740c20c99c0\"" Jul 10 00:38:04.693583 systemd[1]: Started cri-containerd-d70f495607b14973654a5eecb1a738f9865c761d4d9e5bb9a4a6d740c20c99c0.scope - libcontainer container d70f495607b14973654a5eecb1a738f9865c761d4d9e5bb9a4a6d740c20c99c0. Jul 10 00:38:04.723603 containerd[1431]: time="2025-07-10T00:38:04.723561631Z" level=info msg="StartContainer for \"d70f495607b14973654a5eecb1a738f9865c761d4d9e5bb9a4a6d740c20c99c0\" returns successfully" Jul 10 00:38:05.010668 kubelet[2453]: I0710 00:38:05.010615 2453 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:38:05.014284 kubelet[2453]: I0710 00:38:05.014249 2453 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:38:05.226766 kubelet[2453]: I0710 00:38:05.226704 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qd5xx" podStartSLOduration=22.284044231 podStartE2EDuration="28.226686822s" podCreationTimestamp="2025-07-10 00:37:37 +0000 UTC" firstStartedPulling="2025-07-10 00:37:58.684597531 +0000 UTC m=+41.869701559" lastFinishedPulling="2025-07-10 00:38:04.627240122 +0000 UTC m=+47.812344150" observedRunningTime="2025-07-10 00:38:05.226280822 +0000 UTC m=+48.411384850" watchObservedRunningTime="2025-07-10 00:38:05.226686822 +0000 UTC m=+48.411790850" Jul 10 00:38:05.821390 kubelet[2453]: I0710 00:38:05.821300 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:38:08.028646 systemd[1]: Started sshd@9-10.0.0.112:22-10.0.0.1:52502.service - OpenSSH per-connection server daemon (10.0.0.1:52502). Jul 10 00:38:08.076294 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 52502 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:08.078017 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:08.083757 systemd-logind[1418]: New session 10 of user core. Jul 10 00:38:08.091570 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:38:08.380408 sshd[5569]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:08.389102 systemd[1]: sshd@9-10.0.0.112:22-10.0.0.1:52502.service: Deactivated successfully. Jul 10 00:38:08.391383 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:38:08.393259 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:38:08.399740 systemd[1]: Started sshd@10-10.0.0.112:22-10.0.0.1:52518.service - OpenSSH per-connection server daemon (10.0.0.1:52518). Jul 10 00:38:08.401873 systemd-logind[1418]: Removed session 10. 
Jul 10 00:38:08.430660 sshd[5589]: Accepted publickey for core from 10.0.0.1 port 52518 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:08.432078 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:08.436071 systemd-logind[1418]: New session 11 of user core. Jul 10 00:38:08.442550 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:38:08.655648 sshd[5589]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:08.671606 systemd[1]: sshd@10-10.0.0.112:22-10.0.0.1:52518.service: Deactivated successfully. Jul 10 00:38:08.676204 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:38:08.681072 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:38:08.688632 systemd[1]: Started sshd@11-10.0.0.112:22-10.0.0.1:52522.service - OpenSSH per-connection server daemon (10.0.0.1:52522). Jul 10 00:38:08.689812 systemd-logind[1418]: Removed session 11. Jul 10 00:38:08.722876 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 52522 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:08.724415 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:08.728901 systemd-logind[1418]: New session 12 of user core. Jul 10 00:38:08.742539 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:38:08.874726 sshd[5602]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:08.878523 systemd[1]: sshd@11-10.0.0.112:22-10.0.0.1:52522.service: Deactivated successfully. Jul 10 00:38:08.880169 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:38:08.881945 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:38:08.882846 systemd-logind[1418]: Removed session 12. Jul 10 00:38:09.906052 kubelet[2453]: I0710 00:38:09.905699 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:38:13.913764 systemd[1]: Started sshd@12-10.0.0.112:22-10.0.0.1:37226.service - OpenSSH per-connection server daemon (10.0.0.1:37226). Jul 10 00:38:13.953188 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 37226 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:13.954893 sshd[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:13.959088 systemd-logind[1418]: New session 13 of user core. Jul 10 00:38:13.970571 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:38:14.150837 sshd[5687]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:14.161738 systemd[1]: sshd@12-10.0.0.112:22-10.0.0.1:37226.service: Deactivated successfully. Jul 10 00:38:14.164304 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:38:14.166087 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:38:14.179110 systemd[1]: Started sshd@13-10.0.0.112:22-10.0.0.1:37242.service - OpenSSH per-connection server daemon (10.0.0.1:37242). Jul 10 00:38:14.183872 systemd-logind[1418]: Removed session 13. Jul 10 00:38:14.214787 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 37242 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:14.216230 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:14.220981 systemd-logind[1418]: New session 14 of user core. 
Jul 10 00:38:14.230558 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:38:14.499618 sshd[5701]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:14.513411 systemd[1]: sshd@13-10.0.0.112:22-10.0.0.1:37242.service: Deactivated successfully. Jul 10 00:38:14.515518 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:38:14.517877 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:38:14.524738 systemd[1]: Started sshd@14-10.0.0.112:22-10.0.0.1:37250.service - OpenSSH per-connection server daemon (10.0.0.1:37250). Jul 10 00:38:14.526082 systemd-logind[1418]: Removed session 14. Jul 10 00:38:14.565393 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 37250 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:14.566991 sshd[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:14.572442 systemd-logind[1418]: New session 15 of user core. Jul 10 00:38:14.577567 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:38:16.342250 sshd[5713]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:16.355781 systemd[1]: sshd@14-10.0.0.112:22-10.0.0.1:37250.service: Deactivated successfully. Jul 10 00:38:16.360145 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:38:16.362812 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:38:16.372202 systemd[1]: Started sshd@15-10.0.0.112:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Jul 10 00:38:16.374401 systemd-logind[1418]: Removed session 15. Jul 10 00:38:16.403398 sshd[5737]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:16.404857 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:16.409324 systemd-logind[1418]: New session 16 of user core. Jul 10 00:38:16.415556 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:38:16.908796 sshd[5737]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:16.921297 systemd[1]: sshd@15-10.0.0.112:22-10.0.0.1:37254.service: Deactivated successfully. Jul 10 00:38:16.924454 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:38:16.926409 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:38:16.927937 containerd[1431]: time="2025-07-10T00:38:16.927903442Z" level=info msg="StopPodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\"" Jul 10 00:38:16.941739 systemd[1]: Started sshd@16-10.0.0.112:22-10.0.0.1:37258.service - OpenSSH per-connection server daemon (10.0.0.1:37258). Jul 10 00:38:16.943780 systemd-logind[1418]: Removed session 16. Jul 10 00:38:16.973190 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 37258 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:38:16.975089 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:38:16.980355 systemd-logind[1418]: New session 17 of user core. Jul 10 00:38:16.988615 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.029 [WARNING][5769] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b35917b6-dff4-49b2-b380-4a0514f6d1e8", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b", Pod:"goldmane-58fd7646b9-sqpvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia220035d9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.029 [INFO][5769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.029 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" iface="eth0" netns="" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.029 [INFO][5769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.029 [INFO][5769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.076 [INFO][5778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.077 [INFO][5778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.077 [INFO][5778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.090 [WARNING][5778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.091 [INFO][5778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.094 [INFO][5778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.100269 containerd[1431]: 2025-07-10 00:38:17.098 [INFO][5769] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.101095 containerd[1431]: time="2025-07-10T00:38:17.100315337Z" level=info msg="TearDown network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" successfully" Jul 10 00:38:17.101095 containerd[1431]: time="2025-07-10T00:38:17.100342417Z" level=info msg="StopPodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" returns successfully" Jul 10 00:38:17.104470 containerd[1431]: time="2025-07-10T00:38:17.104426579Z" level=info msg="RemovePodSandbox for \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\"" Jul 10 00:38:17.121420 containerd[1431]: time="2025-07-10T00:38:17.121344144Z" level=info msg="Forcibly stopping sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\"" Jul 10 00:38:17.152846 sshd[5757]: pam_unix(sshd:session): session closed for user core Jul 10 00:38:17.159145 systemd[1]: sshd@16-10.0.0.112:22-10.0.0.1:37258.service: Deactivated successfully. Jul 10 00:38:17.164154 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:38:17.166023 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:38:17.167494 systemd-logind[1418]: Removed session 17. Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.168 [WARNING][5805] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"b35917b6-dff4-49b2-b380-4a0514f6d1e8", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08b7a0d91afb866c44cde1dc32efbd6011a204b27501e6de6a3f5127edd7f21b", Pod:"goldmane-58fd7646b9-sqpvz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia220035d9ba", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.168 [INFO][5805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.168 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" iface="eth0" netns="" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.168 [INFO][5805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.168 [INFO][5805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.194 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.194 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.194 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.204 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.204 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" HandleID="k8s-pod-network.10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Workload="localhost-k8s-goldmane--58fd7646b9--sqpvz-eth0" Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.205 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.209340 containerd[1431]: 2025-07-10 00:38:17.207 [INFO][5805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3" Jul 10 00:38:17.209772 containerd[1431]: time="2025-07-10T00:38:17.209354611Z" level=info msg="TearDown network for sandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" successfully" Jul 10 00:38:17.231494 containerd[1431]: time="2025-07-10T00:38:17.231435138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:38:17.231595 containerd[1431]: time="2025-07-10T00:38:17.231536658Z" level=info msg="RemovePodSandbox \"10a74b38c20886e9ded16f3df3a34d462b11a8954094152a142f59b5ef89ead3\" returns successfully" Jul 10 00:38:17.232156 containerd[1431]: time="2025-07-10T00:38:17.232123898Z" level=info msg="StopPodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\"" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.269 [WARNING][5835] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c3c9fa5-4474-4544-97e7-30e66ba1f67c", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661", Pod:"coredns-7c65d6cfc9-rsqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f8589d2044", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.269 [INFO][5835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.269 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" iface="eth0" netns="" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.269 [INFO][5835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.269 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.295 [INFO][5844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.295 [INFO][5844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.296 [INFO][5844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.305 [WARNING][5844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.305 [INFO][5844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.306 [INFO][5844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.310512 containerd[1431]: 2025-07-10 00:38:17.308 [INFO][5835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.310930 containerd[1431]: time="2025-07-10T00:38:17.310555522Z" level=info msg="TearDown network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" successfully" Jul 10 00:38:17.310930 containerd[1431]: time="2025-07-10T00:38:17.310579322Z" level=info msg="StopPodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" returns successfully" Jul 10 00:38:17.311094 containerd[1431]: time="2025-07-10T00:38:17.311063203Z" level=info msg="RemovePodSandbox for \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\"" Jul 10 00:38:17.311129 containerd[1431]: time="2025-07-10T00:38:17.311104443Z" level=info msg="Forcibly stopping sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\"" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.347 [WARNING][5862] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c3c9fa5-4474-4544-97e7-30e66ba1f67c", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a98852a8761daa9d4f556421448ec56845dd7d3e5b419c738fcf15ae90476661", Pod:"coredns-7c65d6cfc9-rsqvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f8589d2044", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.347 [INFO][5862] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.347 [INFO][5862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" iface="eth0" netns="" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.347 [INFO][5862] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.347 [INFO][5862] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.368 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.368 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.369 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.382 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.382 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" HandleID="k8s-pod-network.a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Workload="localhost-k8s-coredns--7c65d6cfc9--rsqvw-eth0" Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.383 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.389265 containerd[1431]: 2025-07-10 00:38:17.387 [INFO][5862] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25" Jul 10 00:38:17.389678 containerd[1431]: time="2025-07-10T00:38:17.389324387Z" level=info msg="TearDown network for sandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" successfully" Jul 10 00:38:17.392129 containerd[1431]: time="2025-07-10T00:38:17.392087068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:38:17.392206 containerd[1431]: time="2025-07-10T00:38:17.392182028Z" level=info msg="RemovePodSandbox \"a113916d9b8b3b0ba4d4c0a15a2ca21eefc0e37c35208b6544613272a145dc25\" returns successfully" Jul 10 00:38:17.392810 containerd[1431]: time="2025-07-10T00:38:17.392769388Z" level=info msg="StopPodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\"" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.443 [WARNING][5889] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0", GenerateName:"calico-kube-controllers-55cdd86887-", Namespace:"calico-system", SelfLink:"", UID:"50dbd3c7-db9e-475c-b96d-679203b54cc6", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdd86887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360", Pod:"calico-kube-controllers-55cdd86887-d59d2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e4f83a4738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.444 [INFO][5889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.444 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" iface="eth0" netns="" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.444 [INFO][5889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.444 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.462 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.462 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.462 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.470 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.470 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.472 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.475802 containerd[1431]: 2025-07-10 00:38:17.474 [INFO][5889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.475802 containerd[1431]: time="2025-07-10T00:38:17.475777494Z" level=info msg="TearDown network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" successfully" Jul 10 00:38:17.475802 containerd[1431]: time="2025-07-10T00:38:17.475803974Z" level=info msg="StopPodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" returns successfully" Jul 10 00:38:17.477094 containerd[1431]: time="2025-07-10T00:38:17.476256654Z" level=info msg="RemovePodSandbox for \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\"" Jul 10 00:38:17.477094 containerd[1431]: time="2025-07-10T00:38:17.476285214Z" level=info msg="Forcibly stopping sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\"" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.509 [WARNING][5915] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0", GenerateName:"calico-kube-controllers-55cdd86887-", Namespace:"calico-system", SelfLink:"", UID:"50dbd3c7-db9e-475c-b96d-679203b54cc6", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55cdd86887", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ee5fb145f0702fa55c528df019ddc3d6b9030c858038d8393c8fc46640a93360", Pod:"calico-kube-controllers-55cdd86887-d59d2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e4f83a4738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.509 [INFO][5915] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.509 [INFO][5915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" iface="eth0" netns="" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.509 [INFO][5915] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.509 [INFO][5915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.528 [INFO][5924] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.528 [INFO][5924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.528 [INFO][5924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.536 [WARNING][5924] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.536 [INFO][5924] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" HandleID="k8s-pod-network.67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Workload="localhost-k8s-calico--kube--controllers--55cdd86887--d59d2-eth0" Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.539 [INFO][5924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.542815 containerd[1431]: 2025-07-10 00:38:17.541 [INFO][5915] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0" Jul 10 00:38:17.543269 containerd[1431]: time="2025-07-10T00:38:17.542857634Z" level=info msg="TearDown network for sandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" successfully" Jul 10 00:38:17.565604 containerd[1431]: time="2025-07-10T00:38:17.550300117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:38:17.566020 containerd[1431]: time="2025-07-10T00:38:17.565979362Z" level=info msg="RemovePodSandbox \"67bd3e549f5f174ee4f9b0f81f672161cc24abadba37a46333f43c91c01c4ff0\" returns successfully" Jul 10 00:38:17.566804 containerd[1431]: time="2025-07-10T00:38:17.566566202Z" level=info msg="StopPodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\"" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.602 [WARNING][5942] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0", Pod:"calico-apiserver-9b5f696fd-6xg9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib33295a3608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.603 [INFO][5942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.603 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" iface="eth0" netns="" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.603 [INFO][5942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.603 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.621 [INFO][5951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.621 [INFO][5951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.621 [INFO][5951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.629 [WARNING][5951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.629 [INFO][5951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.631 [INFO][5951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.634697 containerd[1431]: 2025-07-10 00:38:17.633 [INFO][5942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.635091 containerd[1431]: time="2025-07-10T00:38:17.634751623Z" level=info msg="TearDown network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" successfully" Jul 10 00:38:17.635091 containerd[1431]: time="2025-07-10T00:38:17.634780343Z" level=info msg="StopPodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" returns successfully" Jul 10 00:38:17.635397 containerd[1431]: time="2025-07-10T00:38:17.635342503Z" level=info msg="RemovePodSandbox for \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\"" Jul 10 00:38:17.635446 containerd[1431]: time="2025-07-10T00:38:17.635407503Z" level=info msg="Forcibly stopping sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\"" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.667 [WARNING][5968] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7ca86f0-bcb9-4f79-9b5e-0dc27b14ae85", ResourceVersion:"1116", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ca4ac7511891295855fdbd73df6f7be22ded661684d275822b843f0fc5a06e0", Pod:"calico-apiserver-9b5f696fd-6xg9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib33295a3608", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.667 [INFO][5968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.667 [INFO][5968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" iface="eth0" netns="" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.667 [INFO][5968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.668 [INFO][5968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.686 [INFO][5977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.686 [INFO][5977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.686 [INFO][5977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.695 [WARNING][5977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.695 [INFO][5977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" HandleID="k8s-pod-network.0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Workload="localhost-k8s-calico--apiserver--9b5f696fd--6xg9b-eth0" Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.696 [INFO][5977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.700106 containerd[1431]: 2025-07-10 00:38:17.698 [INFO][5968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b" Jul 10 00:38:17.700557 containerd[1431]: time="2025-07-10T00:38:17.700146003Z" level=info msg="TearDown network for sandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" successfully" Jul 10 00:38:17.711842 containerd[1431]: time="2025-07-10T00:38:17.711789127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:38:17.711901 containerd[1431]: time="2025-07-10T00:38:17.711867727Z" level=info msg="RemovePodSandbox \"0e30b543556f6bcf9c6b961c07962ddb7d49c424ec8b29ae1b1296ecc4785a0b\" returns successfully" Jul 10 00:38:17.712353 containerd[1431]: time="2025-07-10T00:38:17.712310647Z" level=info msg="StopPodSandbox for \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\"" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.745 [WARNING][5995] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" WorkloadEndpoint="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.745 [INFO][5995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.745 [INFO][5995] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" iface="eth0" netns="" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.745 [INFO][5995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.745 [INFO][5995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.764 [INFO][6004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.764 [INFO][6004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.764 [INFO][6004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.774 [WARNING][6004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.774 [INFO][6004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.775 [INFO][6004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.779479 containerd[1431]: 2025-07-10 00:38:17.777 [INFO][5995] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.779479 containerd[1431]: time="2025-07-10T00:38:17.779414028Z" level=info msg="TearDown network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" successfully" Jul 10 00:38:17.779479 containerd[1431]: time="2025-07-10T00:38:17.779442108Z" level=info msg="StopPodSandbox for \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" returns successfully" Jul 10 00:38:17.779924 containerd[1431]: time="2025-07-10T00:38:17.779889948Z" level=info msg="RemovePodSandbox for \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\"" Jul 10 00:38:17.779961 containerd[1431]: time="2025-07-10T00:38:17.779931228Z" level=info msg="Forcibly stopping sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\"" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.812 [WARNING][6023] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" WorkloadEndpoint="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.813 [INFO][6023] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.813 [INFO][6023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" iface="eth0" netns="" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.813 [INFO][6023] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.813 [INFO][6023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.831 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.831 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.831 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.839 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.839 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" HandleID="k8s-pod-network.b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Workload="localhost-k8s-whisker--5b6dd46dd7--xjrpt-eth0" Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.841 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.844669 containerd[1431]: 2025-07-10 00:38:17.842 [INFO][6023] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a" Jul 10 00:38:17.845002 containerd[1431]: time="2025-07-10T00:38:17.844706088Z" level=info msg="TearDown network for sandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" successfully" Jul 10 00:38:17.847353 containerd[1431]: time="2025-07-10T00:38:17.847316809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:38:17.847489 containerd[1431]: time="2025-07-10T00:38:17.847387049Z" level=info msg="RemovePodSandbox \"b7df34fffe2b1787eb828f67ddfb2827d417dcbf2e4c791ab02afae04413888a\" returns successfully" Jul 10 00:38:17.848234 containerd[1431]: time="2025-07-10T00:38:17.847948809Z" level=info msg="StopPodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\"" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.879 [WARNING][6049] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2d71dbf9-7620-445d-8d35-2cc9ef195ea7", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5", Pod:"coredns-7c65d6cfc9-qzjj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali409f727fa17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.879 [INFO][6049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.879 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" iface="eth0" netns="" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.879 [INFO][6049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.879 [INFO][6049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.897 [INFO][6058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.898 [INFO][6058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.898 [INFO][6058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.906 [WARNING][6058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.906 [INFO][6058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.908 [INFO][6058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:38:17.911463 containerd[1431]: 2025-07-10 00:38:17.909 [INFO][6049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.912220 containerd[1431]: time="2025-07-10T00:38:17.911936869Z" level=info msg="TearDown network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" successfully" Jul 10 00:38:17.912220 containerd[1431]: time="2025-07-10T00:38:17.911967269Z" level=info msg="StopPodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" returns successfully" Jul 10 00:38:17.912486 containerd[1431]: time="2025-07-10T00:38:17.912428989Z" level=info msg="RemovePodSandbox for \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\"" Jul 10 00:38:17.912486 containerd[1431]: time="2025-07-10T00:38:17.912462309Z" level=info msg="Forcibly stopping sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\"" Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.946 [WARNING][6076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2d71dbf9-7620-445d-8d35-2cc9ef195ea7", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcf86e14ef41e8cb38c862903f45486efd530bfb95d7abc0f744ba950ba593d5", Pod:"coredns-7c65d6cfc9-qzjj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali409f727fa17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.946 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.946 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" iface="eth0" netns="" Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.946 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.946 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.963 [INFO][6085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0" Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.964 [INFO][6085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.964 [INFO][6085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.973 [WARNING][6085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0"
Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.973 [INFO][6085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" HandleID="k8s-pod-network.f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95" Workload="localhost-k8s-coredns--7c65d6cfc9--qzjj6-eth0"
Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.975 [INFO][6085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:38:17.978314 containerd[1431]: 2025-07-10 00:38:17.976 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95"
Jul 10 00:38:17.979939 containerd[1431]: time="2025-07-10T00:38:17.978459169Z" level=info msg="TearDown network for sandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" successfully"
Jul 10 00:38:17.983118 containerd[1431]: time="2025-07-10T00:38:17.983049691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 00:38:17.983248 containerd[1431]: time="2025-07-10T00:38:17.983229531Z" level=info msg="RemovePodSandbox \"f70038e1a2e67210a73990fd868e7bf796e16aa35076e07dc27686c5f011ea95\" returns successfully"
Jul 10 00:38:17.983926 containerd[1431]: time="2025-07-10T00:38:17.983901771Z" level=info msg="StopPodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\""
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.031 [WARNING][6103] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qd5xx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b36436a-7d97-4120-92b3-49bbe1e5480c", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5", Pod:"csi-node-driver-qd5xx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeab87cf85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.032 [INFO][6103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.032 [INFO][6103] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" iface="eth0" netns=""
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.032 [INFO][6103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.032 [INFO][6103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.051 [INFO][6112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0"
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.051 [INFO][6112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.051 [INFO][6112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.060 [WARNING][6112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0"
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.060 [INFO][6112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0"
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.062 [INFO][6112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:38:18.065770 containerd[1431]: 2025-07-10 00:38:18.064 [INFO][6103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.065770 containerd[1431]: time="2025-07-10T00:38:18.065734235Z" level=info msg="TearDown network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" successfully"
Jul 10 00:38:18.065770 containerd[1431]: time="2025-07-10T00:38:18.065759235Z" level=info msg="StopPodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" returns successfully"
Jul 10 00:38:18.066255 containerd[1431]: time="2025-07-10T00:38:18.066196755Z" level=info msg="RemovePodSandbox for \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\""
Jul 10 00:38:18.066255 containerd[1431]: time="2025-07-10T00:38:18.066233035Z" level=info msg="Forcibly stopping sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\""
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.104 [WARNING][6130] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qd5xx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3b36436a-7d97-4120-92b3-49bbe1e5480c", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"612f23f8936b1fb90e664913425d463ccf10b73280875cdd6a5ed062d79bc3c5", Pod:"csi-node-driver-qd5xx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califeab87cf85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.104 [INFO][6130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.104 [INFO][6130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" iface="eth0" netns=""
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.104 [INFO][6130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.104 [INFO][6130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.122 [INFO][6139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0"
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.122 [INFO][6139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.122 [INFO][6139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.130 [WARNING][6139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0"
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.130 [INFO][6139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" HandleID="k8s-pod-network.89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df" Workload="localhost-k8s-csi--node--driver--qd5xx-eth0"
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.133 [INFO][6139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:38:18.137701 containerd[1431]: 2025-07-10 00:38:18.135 [INFO][6130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df"
Jul 10 00:38:18.138197 containerd[1431]: time="2025-07-10T00:38:18.137735296Z" level=info msg="TearDown network for sandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" successfully"
Jul 10 00:38:18.140399 containerd[1431]: time="2025-07-10T00:38:18.140343297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 00:38:18.140615 containerd[1431]: time="2025-07-10T00:38:18.140413937Z" level=info msg="RemovePodSandbox \"89720421f3246515c804d699fc03aa20c5e6528c383286f148a9915b076ca9df\" returns successfully"
Jul 10 00:38:18.141032 containerd[1431]: time="2025-07-10T00:38:18.140916537Z" level=info msg="StopPodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\""
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.187 [WARNING][6156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f658df4-5506-4d46-bb23-7f9741b9a122", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209", Pod:"calico-apiserver-9b5f696fd-429ml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468623cb9d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.187 [INFO][6156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.187 [INFO][6156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" iface="eth0" netns=""
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.187 [INFO][6156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.187 [INFO][6156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.204 [INFO][6165] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0"
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.204 [INFO][6165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.204 [INFO][6165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.215 [WARNING][6165] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0"
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.215 [INFO][6165] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0"
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.216 [INFO][6165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:38:18.220739 containerd[1431]: 2025-07-10 00:38:18.218 [INFO][6156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.221583 containerd[1431]: time="2025-07-10T00:38:18.221209480Z" level=info msg="TearDown network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" successfully"
Jul 10 00:38:18.221583 containerd[1431]: time="2025-07-10T00:38:18.221256400Z" level=info msg="StopPodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" returns successfully"
Jul 10 00:38:18.222298 containerd[1431]: time="2025-07-10T00:38:18.221994120Z" level=info msg="RemovePodSandbox for \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\""
Jul 10 00:38:18.222298 containerd[1431]: time="2025-07-10T00:38:18.222025720Z" level=info msg="Forcibly stopping sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\""
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.257 [WARNING][6183] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0", GenerateName:"calico-apiserver-9b5f696fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f658df4-5506-4d46-bb23-7f9741b9a122", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 37, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b5f696fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ccb47c543917df4bed1b333fc01e70798addb524c7ac41e593e16652e0b4209", Pod:"calico-apiserver-9b5f696fd-429ml", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali468623cb9d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.258 [INFO][6183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.258 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" iface="eth0" netns=""
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.258 [INFO][6183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.258 [INFO][6183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.284 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0"
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.284 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.284 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.296 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0"
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.296 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" HandleID="k8s-pod-network.4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6" Workload="localhost-k8s-calico--apiserver--9b5f696fd--429ml-eth0"
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.297 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 10 00:38:18.301154 containerd[1431]: 2025-07-10 00:38:18.299 [INFO][6183] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6"
Jul 10 00:38:18.303345 containerd[1431]: time="2025-07-10T00:38:18.301664024Z" level=info msg="TearDown network for sandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" successfully"
Jul 10 00:38:18.304662 containerd[1431]: time="2025-07-10T00:38:18.304628704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 10 00:38:18.304782 containerd[1431]: time="2025-07-10T00:38:18.304765105Z" level=info msg="RemovePodSandbox \"4f7c72956f2b92ad47598c20bd537d65c991e7fd054f449dabcc3377382187a6\" returns successfully"
Jul 10 00:38:20.414929 systemd[1]: run-containerd-runc-k8s.io-223d7465fbd372cf6b6d806aa0eda7791352ce5d4c417c9f153573f0c87fc7be-runc.RBmURe.mount: Deactivated successfully.
Jul 10 00:38:22.163283 systemd[1]: Started sshd@17-10.0.0.112:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272).
Jul 10 00:38:22.230431 sshd[6248]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:38:22.234025 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:38:22.240081 systemd-logind[1418]: New session 18 of user core.
Jul 10 00:38:22.245572 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 10 00:38:22.376933 sshd[6248]: pam_unix(sshd:session): session closed for user core
Jul 10 00:38:22.380328 systemd[1]: sshd@17-10.0.0.112:22-10.0.0.1:37272.service: Deactivated successfully.
Jul 10 00:38:22.382059 systemd[1]: session-18.scope: Deactivated successfully.
Jul 10 00:38:22.383429 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit.
Jul 10 00:38:22.384470 systemd-logind[1418]: Removed session 18.
Jul 10 00:38:25.515518 kubelet[2453]: I0710 00:38:25.515348 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:38:27.391754 systemd[1]: Started sshd@18-10.0.0.112:22-10.0.0.1:58458.service - OpenSSH per-connection server daemon (10.0.0.1:58458).
Jul 10 00:38:27.432200 sshd[6267]: Accepted publickey for core from 10.0.0.1 port 58458 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:38:27.432986 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:38:27.441283 systemd-logind[1418]: New session 19 of user core.
Jul 10 00:38:27.457371 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 10 00:38:27.594637 sshd[6267]: pam_unix(sshd:session): session closed for user core
Jul 10 00:38:27.597786 systemd[1]: sshd@18-10.0.0.112:22-10.0.0.1:58458.service: Deactivated successfully.
Jul 10 00:38:27.601522 systemd[1]: session-19.scope: Deactivated successfully.
Jul 10 00:38:27.603166 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit.
Jul 10 00:38:27.604180 systemd-logind[1418]: Removed session 19.
Jul 10 00:38:32.609681 systemd[1]: Started sshd@19-10.0.0.112:22-10.0.0.1:46806.service - OpenSSH per-connection server daemon (10.0.0.1:46806).
Jul 10 00:38:32.661022 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 46806 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs
Jul 10 00:38:32.663101 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 00:38:32.669454 systemd-logind[1418]: New session 20 of user core.
Jul 10 00:38:32.676805 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 10 00:38:32.934887 sshd[6283]: pam_unix(sshd:session): session closed for user core
Jul 10 00:38:32.938396 systemd[1]: sshd@19-10.0.0.112:22-10.0.0.1:46806.service: Deactivated successfully.
Jul 10 00:38:32.940899 systemd[1]: session-20.scope: Deactivated successfully.
Jul 10 00:38:32.941508 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit.
Jul 10 00:38:32.943057 systemd-logind[1418]: Removed session 20.