Jul 14 21:46:19.897433 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:46:19.897467 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jul 14 20:26:44 -00 2025
Jul 14 21:46:19.897477 kernel: KASLR enabled
Jul 14 21:46:19.897483 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:46:19.897489 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 14 21:46:19.897495 kernel: random: crng init done
Jul 14 21:46:19.897502 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:46:19.897507 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 14 21:46:19.897513 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:46:19.897521 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897527 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897533 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897538 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897544 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897552 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897560 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897566 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897572 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:46:19.897579 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:46:19.897585 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:46:19.897591 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:46:19.897598 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 14 21:46:19.897604 kernel: Zone ranges:
Jul 14 21:46:19.897610 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:46:19.897617 kernel: DMA32 empty
Jul 14 21:46:19.897625 kernel: Normal empty
Jul 14 21:46:19.897631 kernel: Movable zone start for each node
Jul 14 21:46:19.897637 kernel: Early memory node ranges
Jul 14 21:46:19.897643 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 14 21:46:19.897650 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 14 21:46:19.897656 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 14 21:46:19.897662 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 21:46:19.897668 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 21:46:19.897675 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 21:46:19.897681 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 21:46:19.897687 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:46:19.897693 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:46:19.897701 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:46:19.897712 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:46:19.897721 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:46:19.897731 kernel: psci: Trusted OS migration not required
Jul 14 21:46:19.897738 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:46:19.897745 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:46:19.897753 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 14 21:46:19.897759 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 14 21:46:19.897766 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:46:19.897773 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:46:19.897779 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:46:19.897786 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:46:19.897793 kernel: CPU features: detected: Spectre-v4
Jul 14 21:46:19.897799 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:46:19.897806 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:46:19.897813 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:46:19.897821 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:46:19.897827 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:46:19.897834 kernel: alternatives: applying boot alternatives
Jul 14 21:46:19.897842 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:46:19.897849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:46:19.897856 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:46:19.897862 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:46:19.897869 kernel: Fallback order for Node 0: 0
Jul 14 21:46:19.897876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:46:19.897882 kernel: Policy zone: DMA
Jul 14 21:46:19.897889 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:46:19.897897 kernel: software IO TLB: area num 4.
Jul 14 21:46:19.897904 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 14 21:46:19.897911 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 14 21:46:19.897918 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:46:19.897924 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:46:19.897931 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:46:19.897938 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:46:19.897945 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:46:19.897952 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:46:19.897959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:46:19.897965 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:46:19.897972 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:46:19.897980 kernel: GICv3: 256 SPIs implemented
Jul 14 21:46:19.897987 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:46:19.897993 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:46:19.898000 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 21:46:19.898007 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:46:19.898013 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:46:19.898020 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:46:19.898027 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:46:19.898034 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 14 21:46:19.898040 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 14 21:46:19.898047 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:46:19.898055 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:46:19.898062 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:46:19.898069 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:46:19.898076 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:46:19.898082 kernel: arm-pv: using stolen time PV
Jul 14 21:46:19.898090 kernel: Console: colour dummy device 80x25
Jul 14 21:46:19.898096 kernel: ACPI: Core revision 20230628
Jul 14 21:46:19.898103 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:46:19.898110 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:46:19.898117 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 21:46:19.898125 kernel: landlock: Up and running.
Jul 14 21:46:19.898132 kernel: SELinux: Initializing.
Jul 14 21:46:19.898139 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:46:19.898146 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:46:19.898153 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:46:19.898160 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:46:19.898166 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:46:19.898173 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:46:19.898180 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:46:19.898188 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:46:19.898195 kernel: Remapping and enabling EFI services.
Jul 14 21:46:19.898202 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:46:19.898209 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:46:19.898216 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:46:19.898223 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 14 21:46:19.898230 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:46:19.898237 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:46:19.898243 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:46:19.898250 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:46:19.898259 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 14 21:46:19.898266 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:46:19.898278 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:46:19.898287 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:46:19.898294 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:46:19.898301 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 14 21:46:19.898309 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:46:19.898315 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:46:19.898323 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:46:19.898332 kernel: SMP: Total of 4 processors activated.
Jul 14 21:46:19.898339 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:46:19.898346 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:46:19.898354 kernel: CPU features: detected: Common not Private translations
Jul 14 21:46:19.898361 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:46:19.898368 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 21:46:19.898376 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:46:19.898383 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:46:19.898392 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:46:19.898399 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:46:19.898407 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:46:19.898414 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:46:19.898422 kernel: alternatives: applying system-wide alternatives
Jul 14 21:46:19.898429 kernel: devtmpfs: initialized
Jul 14 21:46:19.898454 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:46:19.898462 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:46:19.898469 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:46:19.898479 kernel: SMBIOS 3.0.0 present.
Jul 14 21:46:19.898487 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 14 21:46:19.898494 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:46:19.898501 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:46:19.898509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:46:19.898516 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:46:19.898523 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:46:19.898530 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 14 21:46:19.898538 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:46:19.898546 kernel: cpuidle: using governor menu
Jul 14 21:46:19.898553 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:46:19.898561 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:46:19.898568 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:46:19.898575 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:46:19.898582 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 21:46:19.898589 kernel: Modules: 0 pages in range for non-PLT usage
Jul 14 21:46:19.898597 kernel: Modules: 509008 pages in range for PLT usage
Jul 14 21:46:19.898604 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:46:19.898612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:46:19.898620 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:46:19.898627 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 21:46:19.898635 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:46:19.898642 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:46:19.898649 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:46:19.898656 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 21:46:19.898664 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:46:19.898671 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:46:19.898679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:46:19.898687 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:46:19.898694 kernel: ACPI: Interpreter enabled
Jul 14 21:46:19.898701 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:46:19.898712 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:46:19.898725 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:46:19.898733 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:46:19.898740 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:46:19.898876 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:46:19.898956 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:46:19.899026 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:46:19.899089 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:46:19.899151 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:46:19.899161 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:46:19.899168 kernel: PCI host bridge to bus 0000:00
Jul 14 21:46:19.899238 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:46:19.899299 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:46:19.899356 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:46:19.899413 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:46:19.899538 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:46:19.899617 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:46:19.899685 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:46:19.899778 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:46:19.899847 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:46:19.899913 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:46:19.899978 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:46:19.900043 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:46:19.900104 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:46:19.900161 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:46:19.900222 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:46:19.900231 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:46:19.900239 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:46:19.900246 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:46:19.900253 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:46:19.900261 kernel: iommu: Default domain type: Translated
Jul 14 21:46:19.900268 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:46:19.900276 kernel: efivars: Registered efivars operations
Jul 14 21:46:19.900283 kernel: vgaarb: loaded
Jul 14 21:46:19.900292 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:46:19.900300 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:46:19.900307 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:46:19.900314 kernel: pnp: PnP ACPI init
Jul 14 21:46:19.900390 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:46:19.900401 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:46:19.900408 kernel: NET: Registered PF_INET protocol family
Jul 14 21:46:19.900416 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:46:19.900425 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:46:19.900433 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:46:19.900458 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:46:19.900465 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:46:19.900473 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:46:19.900480 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:46:19.900488 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:46:19.900495 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:46:19.900502 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:46:19.900511 kernel: kvm [1]: HYP mode not available
Jul 14 21:46:19.900519 kernel: Initialise system trusted keyrings
Jul 14 21:46:19.900526 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:46:19.900533 kernel: Key type asymmetric registered
Jul 14 21:46:19.900540 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:46:19.900547 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 14 21:46:19.900555 kernel: io scheduler mq-deadline registered
Jul 14 21:46:19.900562 kernel: io scheduler kyber registered
Jul 14 21:46:19.900570 kernel: io scheduler bfq registered
Jul 14 21:46:19.900579 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:46:19.900586 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:46:19.900594 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:46:19.900663 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:46:19.900673 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:46:19.900681 kernel: thunder_xcv, ver 1.0
Jul 14 21:46:19.900688 kernel: thunder_bgx, ver 1.0
Jul 14 21:46:19.900695 kernel: nicpf, ver 1.0
Jul 14 21:46:19.900703 kernel: nicvf, ver 1.0
Jul 14 21:46:19.900788 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:46:19.900851 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:46:19 UTC (1752529579)
Jul 14 21:46:19.900861 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:46:19.900869 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:46:19.900876 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 14 21:46:19.900883 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 21:46:19.900890 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:46:19.900898 kernel: Segment Routing with IPv6
Jul 14 21:46:19.900907 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:46:19.900915 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:46:19.900922 kernel: Key type dns_resolver registered
Jul 14 21:46:19.900929 kernel: registered taskstats version 1
Jul 14 21:46:19.900936 kernel: Loading compiled-in X.509 certificates
Jul 14 21:46:19.900944 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 0878f879bf0f15203fd920e9f7d6346db298c301'
Jul 14 21:46:19.900951 kernel: Key type .fscrypt registered
Jul 14 21:46:19.900958 kernel: Key type fscrypt-provisioning registered
Jul 14 21:46:19.900965 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:46:19.900974 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:46:19.900981 kernel: ima: No architecture policies found
Jul 14 21:46:19.900988 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:46:19.900995 kernel: clk: Disabling unused clocks
Jul 14 21:46:19.901003 kernel: Freeing unused kernel memory: 39424K
Jul 14 21:46:19.901010 kernel: Run /init as init process
Jul 14 21:46:19.901017 kernel: with arguments:
Jul 14 21:46:19.901024 kernel: /init
Jul 14 21:46:19.901031 kernel: with environment:
Jul 14 21:46:19.901040 kernel: HOME=/
Jul 14 21:46:19.901047 kernel: TERM=linux
Jul 14 21:46:19.901054 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:46:19.901063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 21:46:19.901072 systemd[1]: Detected virtualization kvm.
Jul 14 21:46:19.901080 systemd[1]: Detected architecture arm64.
Jul 14 21:46:19.901088 systemd[1]: Running in initrd.
Jul 14 21:46:19.901097 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:46:19.901104 systemd[1]: Hostname set to .
Jul 14 21:46:19.901112 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:46:19.901120 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:46:19.901128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:46:19.901135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:46:19.901144 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:46:19.901152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:46:19.901161 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:46:19.901169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:46:19.901179 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:46:19.901187 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:46:19.901195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:46:19.901203 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:46:19.901211 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:46:19.901220 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:46:19.901228 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:46:19.901236 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:46:19.901243 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:46:19.901251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:46:19.901259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:46:19.901267 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 21:46:19.901275 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:46:19.901283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:46:19.901292 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:46:19.901300 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:46:19.901308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:46:19.901315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:46:19.901323 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:46:19.901331 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:46:19.901339 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:46:19.901346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:46:19.901356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:46:19.901364 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:46:19.901372 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:46:19.901380 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:46:19.901405 systemd-journald[237]: Collecting audit messages is disabled.
Jul 14 21:46:19.901425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:46:19.901433 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:46:19.901451 kernel: Bridge firewalling registered
Jul 14 21:46:19.901458 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:46:19.901468 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:46:19.901476 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:46:19.901485 systemd-journald[237]: Journal started
Jul 14 21:46:19.901503 systemd-journald[237]: Runtime Journal (/run/log/journal/75be876a813740b9b357e5d0c35a95d7) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:46:19.883398 systemd-modules-load[239]: Inserted module 'overlay'
Jul 14 21:46:19.903546 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:46:19.896641 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 14 21:46:19.905379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:46:19.906931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:46:19.910658 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:46:19.912170 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:46:19.920013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:46:19.921312 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:46:19.924286 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:46:19.934616 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:46:19.935597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:46:19.939522 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:46:19.954225 dracut-cmdline[280]: dracut-dracut-053
Jul 14 21:46:19.957208 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:46:19.972569 systemd-resolved[276]: Positive Trust Anchors:
Jul 14 21:46:19.972585 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:46:19.972618 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:46:19.979119 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 14 21:46:19.980353 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:46:19.981285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:46:20.031466 kernel: SCSI subsystem initialized
Jul 14 21:46:20.036454 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:46:20.043452 kernel: iscsi: registered transport (tcp)
Jul 14 21:46:20.058461 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:46:20.058483 kernel: QLogic iSCSI HBA Driver
Jul 14 21:46:20.099898 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:46:20.109658 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:46:20.125668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:46:20.125720 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:46:20.125737 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 21:46:20.174504 kernel: raid6: neonx8 gen() 15764 MB/s
Jul 14 21:46:20.191459 kernel: raid6: neonx4 gen() 15659 MB/s
Jul 14 21:46:20.208454 kernel: raid6: neonx2 gen() 13243 MB/s
Jul 14 21:46:20.225460 kernel: raid6: neonx1 gen() 10494 MB/s
Jul 14 21:46:20.242461 kernel: raid6: int64x8 gen() 6963 MB/s
Jul 14 21:46:20.259460 kernel: raid6: int64x4 gen() 7346 MB/s
Jul 14 21:46:20.276460 kernel: raid6: int64x2 gen() 6133 MB/s
Jul 14 21:46:20.293451 kernel: raid6: int64x1 gen() 5061 MB/s
Jul 14 21:46:20.293464 kernel: raid6: using algorithm neonx8 gen() 15764 MB/s
Jul 14 21:46:20.310467 kernel: raid6: .... xor() 11935 MB/s, rmw enabled
Jul 14 21:46:20.310494 kernel: raid6: using neon recovery algorithm
Jul 14 21:46:20.315454 kernel: xor: measuring software checksum speed
Jul 14 21:46:20.315472 kernel: 8regs : 19750 MB/sec
Jul 14 21:46:20.316872 kernel: 32regs : 18461 MB/sec
Jul 14 21:46:20.316900 kernel: arm64_neon : 27016 MB/sec
Jul 14 21:46:20.316918 kernel: xor: using function: arm64_neon (27016 MB/sec)
Jul 14 21:46:20.366463 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:46:20.378508 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:46:20.394580 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:46:20.405427 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 14 21:46:20.408568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:46:20.410829 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:46:20.425544 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jul 14 21:46:20.452256 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:46:20.459598 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:46:20.499323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:46:20.508629 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:46:20.520819 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:46:20.522026 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:46:20.523758 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:46:20.525716 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:46:20.536605 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:46:20.547933 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:46:20.548086 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:46:20.549532 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:46:20.551634 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:46:20.555211 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:46:20.555234 kernel: GPT:9289727 != 19775487
Jul 14 21:46:20.555243 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:46:20.555253 kernel: GPT:9289727 != 19775487
Jul 14 21:46:20.555261 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:46:20.555270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:46:20.551760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:46:20.556614 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:46:20.558258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:46:20.558383 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:46:20.560365 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:46:20.569984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:46:20.575283 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (513)
Jul 14 21:46:20.575306 kernel: BTRFS: device fsid a239cc51-2249-4f1a-8861-421a0d84a369 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (509)
Jul 14 21:46:20.584463 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:46:20.588968 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:46:20.593234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:46:20.596824 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:46:20.597737 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:46:20.605719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:46:20.618583 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:46:20.620114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:46:20.624429 disk-uuid[551]: Primary Header is updated.
Jul 14 21:46:20.624429 disk-uuid[551]: Secondary Entries is updated.
Jul 14 21:46:20.624429 disk-uuid[551]: Secondary Header is updated.
Jul 14 21:46:20.626770 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:46:20.641910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:46:21.643465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:46:21.644858 disk-uuid[552]: The operation has completed successfully.
Jul 14 21:46:21.665138 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:46:21.665234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:46:21.688597 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:46:21.691450 sh[574]: Success
Jul 14 21:46:21.704588 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:46:21.731446 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:46:21.739909 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:46:21.741337 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:46:21.750847 kernel: BTRFS info (device dm-0): first mount of filesystem a239cc51-2249-4f1a-8861-421a0d84a369
Jul 14 21:46:21.750888 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:46:21.750899 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:46:21.752461 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:46:21.752496 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:46:21.756182 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:46:21.757410 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:46:21.766598 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:46:21.767993 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:46:21.774995 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:46:21.775041 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:46:21.775053 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:46:21.777477 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:46:21.786493 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:46:21.786562 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 21:46:21.795040 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:46:21.806646 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:46:21.863899 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:46:21.873592 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:46:21.902064 systemd-networkd[768]: lo: Link UP
Jul 14 21:46:21.902074 systemd-networkd[768]: lo: Gained carrier
Jul 14 21:46:21.902836 systemd-networkd[768]: Enumeration completed
Jul 14 21:46:21.906280 ignition[669]: Ignition 2.19.0
Jul 14 21:46:21.902927 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:46:21.906287 ignition[669]: Stage: fetch-offline
Jul 14 21:46:21.903552 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:46:21.906327 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:46:21.903556 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:46:21.906335 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:46:21.903861 systemd[1]: Reached target network.target - Network.
Jul 14 21:46:21.906525 ignition[669]: parsed url from cmdline: ""
Jul 14 21:46:21.905567 systemd-networkd[768]: eth0: Link UP
Jul 14 21:46:21.906528 ignition[669]: no config URL provided
Jul 14 21:46:21.905571 systemd-networkd[768]: eth0: Gained carrier
Jul 14 21:46:21.906532 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:46:21.905578 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:46:21.906540 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:46:21.906564 ignition[669]: op(1): [started] loading QEMU firmware config module
Jul 14 21:46:21.906569 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:46:21.913721 ignition[669]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:46:21.925486 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:46:21.957487 ignition[669]: parsing config with SHA512: 1d12928cb1d8150ef74f4beba6ee21f5c3ee3898ca3dce64db93df85249d44afd4031f51e15ab9756512af5862f9a41cffc6622310d716916c4ba5dfa9fc0b83
Jul 14 21:46:21.961586 unknown[669]: fetched base config from "system"
Jul 14 21:46:21.961596 unknown[669]: fetched user config from "qemu"
Jul 14 21:46:21.962044 ignition[669]: fetch-offline: fetch-offline passed
Jul 14 21:46:21.962112 ignition[669]: Ignition finished successfully
Jul 14 21:46:21.963855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:46:21.965043 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:46:21.977611 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:46:21.988121 ignition[774]: Ignition 2.19.0
Jul 14 21:46:21.988132 ignition[774]: Stage: kargs
Jul 14 21:46:21.988721 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:46:21.988740 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:46:21.989640 ignition[774]: kargs: kargs passed
Jul 14 21:46:21.989690 ignition[774]: Ignition finished successfully
Jul 14 21:46:21.991831 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:46:22.005619 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:46:22.015490 ignition[782]: Ignition 2.19.0
Jul 14 21:46:22.015503 ignition[782]: Stage: disks
Jul 14 21:46:22.015818 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:46:22.015828 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:46:22.019013 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:46:22.016699 ignition[782]: disks: disks passed
Jul 14 21:46:22.020036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:46:22.016760 ignition[782]: Ignition finished successfully
Jul 14 21:46:22.021268 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:46:22.022425 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:46:22.023792 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:46:22.024932 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:46:22.036457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:46:22.050777 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:46:22.066831 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:46:22.073596 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:46:22.115210 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:46:22.116341 kernel: EXT4-fs (vda9): mounted filesystem a9f35e2f-e295-4589-8fb4-4b611a8bb71c r/w with ordered data mode. Quota mode: none.
Jul 14 21:46:22.116253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:46:22.128527 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:46:22.129903 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:46:22.131027 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:46:22.131065 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:46:22.136004 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (800)
Jul 14 21:46:22.131085 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:46:22.138931 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:46:22.138948 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:46:22.138958 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:46:22.137289 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:46:22.140368 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:46:22.142173 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:46:22.142814 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:46:22.180892 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:46:22.184885 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:46:22.187903 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:46:22.191095 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:46:22.260350 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:46:22.270555 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:46:22.271923 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:46:22.276563 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:46:22.297062 ignition[914]: INFO : Ignition 2.19.0
Jul 14 21:46:22.297062 ignition[914]: INFO : Stage: mount
Jul 14 21:46:22.298403 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:46:22.298403 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:46:22.298403 ignition[914]: INFO : mount: mount passed
Jul 14 21:46:22.298403 ignition[914]: INFO : Ignition finished successfully
Jul 14 21:46:22.299825 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:46:22.301832 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:46:22.312560 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:46:22.750104 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:46:22.762685 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:46:22.767466 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (929)
Jul 14 21:46:22.769020 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:46:22.769042 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:46:22.769514 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:46:22.771447 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:46:22.772427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:46:22.787248 ignition[946]: INFO : Ignition 2.19.0
Jul 14 21:46:22.787248 ignition[946]: INFO : Stage: files
Jul 14 21:46:22.788394 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:46:22.788394 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:46:22.788394 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:46:22.790943 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:46:22.790943 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:46:22.790943 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:46:22.793936 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:46:22.793936 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:46:22.793936 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 14 21:46:22.793936 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 14 21:46:22.791331 unknown[946]: wrote ssh authorized keys file for user: core
Jul 14 21:46:22.886669 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 21:46:23.119117 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 14 21:46:23.119117 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 14 21:46:23.121871 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 14 21:46:23.626731 systemd-networkd[768]: eth0: Gained IPv6LL
Jul 14 21:46:23.650283 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 14 21:46:24.043233 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 14 21:46:24.043233 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 14 21:46:24.046255 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:46:24.097984 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:46:24.101736 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:46:24.103608 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:46:24.103608 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 21:46:24.103608 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 21:46:24.103608 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:46:24.103608 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:46:24.103608 ignition[946]: INFO : files: files passed
Jul 14 21:46:24.103608 ignition[946]: INFO : Ignition finished successfully
Jul 14 21:46:24.104410 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 21:46:24.115636 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 21:46:24.117259 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 21:46:24.121533 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:46:24.121649 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 21:46:24.125238 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 21:46:24.127293 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:46:24.127293 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:46:24.130041 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:46:24.131173 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:46:24.133144 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 21:46:24.144671 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 21:46:24.165183 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:46:24.166041 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 21:46:24.167191 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 21:46:24.168537 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 21:46:24.169914 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 21:46:24.170758 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 21:46:24.188264 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:46:24.195620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 21:46:24.204360 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:46:24.205369 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:46:24.206922 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 21:46:24.208200 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:46:24.208325 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:46:24.210182 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 21:46:24.211661 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 21:46:24.212967 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 21:46:24.214258 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:46:24.215710 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 21:46:24.217149 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 21:46:24.218527 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:46:24.220127 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 21:46:24.221635 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 21:46:24.223061 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 21:46:24.224226 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 21:46:24.224346 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:46:24.226166 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:46:24.227650 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:46:24.229011 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 21:46:24.232502 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:46:24.233387 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:46:24.233520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 21:46:24.235536 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:46:24.235647 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 21:46:24.237047 systemd[1]: Stopped target paths.target - Path Units. Jul 14 21:46:24.238135 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:46:24.241534 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:46:24.242453 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 21:46:24.243975 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 21:46:24.245232 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:46:24.245316 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 21:46:24.246390 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:46:24.246480 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 21:46:24.247585 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:46:24.247686 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 21:46:24.248930 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:46:24.249024 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 21:46:24.260704 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 21:46:24.261340 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:46:24.261471 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:46:24.264714 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 21:46:24.265350 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:46:24.265502 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:46:24.266892 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:46:24.267021 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 21:46:24.272342 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:46:24.273235 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 21:46:24.276176 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:46:24.277369 ignition[1001]: INFO : Ignition 2.19.0 Jul 14 21:46:24.277369 ignition[1001]: INFO : Stage: umount Jul 14 21:46:24.277369 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:46:24.277369 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:46:24.283223 ignition[1001]: INFO : umount: umount passed Jul 14 21:46:24.283223 ignition[1001]: INFO : Ignition finished successfully Jul 14 21:46:24.281174 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:46:24.281274 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
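
The umount stage above reports no configs at "/usr/lib/ignition/base.d" and no config dir at "/usr/lib/ignition/base.platform.d/qemu": before each stage runs, Ignition looks for base config fragments in a generic directory and then in a per-platform one. A rough Python sketch of that two-step lookup; the paths come from the log, the collection logic is an assumption:

    from pathlib import Path

    def base_config_fragments(platform: str) -> list:
        """Collect base-config fragments, mirroring the two lookups logged above."""
        fragments = []
        base = Path("/usr/lib/ignition/base.d")
        if base.is_dir():
            fragments += sorted(base.iterdir())
        else:
            print(f'no configs at "{base}"')
        plat = Path(f"/usr/lib/ignition/base.platform.d/{platform}")
        if plat.is_dir():
            fragments += sorted(plat.iterdir())
        else:
            print(f'no config dir at "{plat}"')
        return fragments

    # On this VM both lookups come up empty, matching the log:
    # base_config_fragments("qemu")
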
Jul 14 21:46:24.282507 systemd[1]: Stopped target network.target - Network. Jul 14 21:46:24.284366 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:46:24.284448 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 21:46:24.285691 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:46:24.285743 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 21:46:24.288336 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:46:24.288383 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 21:46:24.289497 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 21:46:24.289610 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 21:46:24.291187 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 21:46:24.292410 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 21:46:24.298573 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:46:24.298721 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 21:46:24.300492 systemd-networkd[768]: eth0: DHCPv6 lease lost Jul 14 21:46:24.301627 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:46:24.301746 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 21:46:24.303746 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:46:24.303802 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:46:24.313558 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 21:46:24.314197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:46:24.314249 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 21:46:24.315643 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:46:24.315683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:46:24.316954 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:46:24.316991 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 21:46:24.318521 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 21:46:24.318558 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:46:24.320072 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:46:24.330000 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:46:24.330138 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 21:46:24.334581 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:46:24.334713 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 21:46:24.336299 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:46:24.336381 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 21:46:24.337560 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:46:24.337687 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:46:24.339460 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:46:24.339519 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 14 21:46:24.340304 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:46:24.340336 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:46:24.341689 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:46:24.341740 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 21:46:24.343766 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:46:24.343804 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 21:46:24.345816 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:46:24.345858 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 21:46:24.352693 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 21:46:24.354311 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:46:24.354377 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:46:24.355963 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:46:24.356161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:46:24.357799 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:46:24.358530 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 21:46:24.360051 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 21:46:24.361774 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 21:46:24.370909 systemd[1]: Switching root. Jul 14 21:46:24.401528 systemd-journald[237]: Journal stopped Jul 14 21:46:25.084988 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 14 21:46:25.085052 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:46:25.085064 kernel: SELinux: policy capability open_perms=1 Jul 14 21:46:25.085078 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:46:25.085088 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:46:25.085099 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:46:25.085109 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:46:25.085123 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:46:25.085136 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:46:25.085147 kernel: audit: type=1403 audit(1752529584.551:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:46:25.085159 systemd[1]: Successfully loaded SELinux policy in 30.400ms. Jul 14 21:46:25.085176 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.451ms. Jul 14 21:46:25.085204 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 21:46:25.085216 systemd[1]: Detected virtualization kvm. Jul 14 21:46:25.085227 systemd[1]: Detected architecture arm64. Jul 14 21:46:25.085238 systemd[1]: Detected first boot. Jul 14 21:46:25.085251 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:46:25.085263 zram_generator::config[1045]: No configuration found. Jul 14 21:46:25.085275 systemd[1]: Populated /etc with preset unit settings. 
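
The long "+PAM +AUDIT +SELINUX -APPARMOR ..." string in the systemd 255 banner above encodes compile-time features: a leading "+" means the feature was built in, "-" means it was built without. A small Python helper that turns the banner into a lookup table:

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
                "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT")

    def parse_features(banner: str) -> dict:
        """Map each feature name to True (+, compiled in) or False (-)."""
        return {tok[1:]: tok.startswith("+") for tok in banner.split()}

    flags = parse_features(FEATURES)
    assert flags["SELINUX"] and not flags["APPARMOR"]   # as the banner says
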
Jul 14 21:46:25.085287 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 21:46:25.085300 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 21:46:25.085311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:46:25.085323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 21:46:25.085335 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 21:46:25.085347 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 21:46:25.085358 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 21:46:25.085370 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 21:46:25.085381 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 21:46:25.085393 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 21:46:25.085407 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 21:46:25.085418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:46:25.085430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:46:25.085455 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 21:46:25.085469 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 21:46:25.085481 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 21:46:25.085492 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 21:46:25.085503 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 14 21:46:25.085514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:46:25.085528 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 21:46:25.085539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 21:46:25.085550 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 21:46:25.085562 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 21:46:25.085573 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 21:46:25.085585 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 21:46:25.085596 systemd[1]: Reached target slices.target - Slice Units. Jul 14 21:46:25.085607 systemd[1]: Reached target swap.target - Swaps. Jul 14 21:46:25.085619 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 21:46:25.085630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 21:46:25.085642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:46:25.085654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 21:46:25.085667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:46:25.085678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 21:46:25.085689 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
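
Slice names such as system-addon\x2dconfig.slice above use systemd's unit-name escaping: "-" separates hierarchy levels in a slice name, so a literal dash inside a component is written as its hex escape \x2d. A rough Python version of the escaping for the common cases ('/' becomes '-', anything outside [A-Za-z0-9:_.] becomes \xNN; leading-dot and other edge cases are ignored here):

    def systemd_escape(component: str) -> str:
        """Escape one unit-name component: '/' -> '-', odd bytes -> \\xNN."""
        out = []
        for b in component.encode():
            c = chr(b)
            if c == "/":
                out.append("-")
            elif c.isalnum() or c in ":_.":
                out.append(c)
            else:
                out.append(f"\\x{b:02x}")   # e.g. '-' (0x2d) -> \x2d
        return "".join(out)

    # "addon-config" under the system slice becomes the name seen above:
    assert "system-" + systemd_escape("addon-config") + ".slice" \
           == r"system-addon\x2dconfig.slice"
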
Jul 14 21:46:25.085708 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 21:46:25.085723 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 21:46:25.085741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 21:46:25.085752 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 21:46:25.085763 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 21:46:25.085775 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:46:25.085787 systemd[1]: Reached target machines.target - Containers. Jul 14 21:46:25.085798 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 21:46:25.085809 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:46:25.085821 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 21:46:25.085834 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 21:46:25.085845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:46:25.085856 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:46:25.085867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:46:25.085878 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 21:46:25.085890 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:46:25.085901 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:46:25.085913 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:46:25.085926 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 21:46:25.085939 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 21:46:25.085950 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:46:25.085961 kernel: fuse: init (API version 7.39) Jul 14 21:46:25.085971 kernel: ACPI: bus type drm_connector registered Jul 14 21:46:25.085981 kernel: loop: module loaded Jul 14 21:46:25.085992 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 21:46:25.086003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 21:46:25.086014 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 21:46:25.086026 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 21:46:25.086039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 21:46:25.086071 systemd-journald[1116]: Collecting audit messages is disabled. Jul 14 21:46:25.086094 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:46:25.086105 systemd[1]: Stopped verity-setup.service. Jul 14 21:46:25.086117 systemd-journald[1116]: Journal started Jul 14 21:46:25.086139 systemd-journald[1116]: Runtime Journal (/run/log/journal/75be876a813740b9b357e5d0c35a95d7) is 5.9M, max 47.3M, 41.4M free. Jul 14 21:46:24.912354 systemd[1]: Queued start job for default target multi-user.target. 
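
The journald line above budgets the runtime journal on /run at "5.9M, max 47.3M". Per journald.conf(5), RuntimeMaxUse= defaults to 15% of the backing filesystem, subject to a hard cap, which is consistent with a /run of roughly 315M here. A sketch of that sizing rule (the 4G cap is taken from the documentation, not from this log):

    def runtime_journal_cap(fs_size_bytes: int, pct: float = 0.15,
                            hard_cap: int = 4 << 30) -> int:
        """Default runtime-journal budget: 15% of the filesystem, capped."""
        return min(int(fs_size_bytes * pct), hard_cap)

    # 15% of a ~315M /run is ~47.2M, matching the "max 47.3M" above:
    print(f"{runtime_journal_cap(315 * 2**20) / 2**20:.1f}M")
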
Jul 14 21:46:24.926243 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 21:46:24.926628 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 21:46:25.089668 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 21:46:25.090283 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 21:46:25.091560 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 21:46:25.092732 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 21:46:25.093810 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 21:46:25.094962 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 21:46:25.096177 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 21:46:25.098465 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 21:46:25.099809 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:46:25.102843 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:46:25.102993 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 21:46:25.104106 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:46:25.104246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:46:25.105359 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:46:25.105531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:46:25.107212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:46:25.107355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:46:25.108739 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:46:25.108872 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 21:46:25.109897 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:46:25.110034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:46:25.111099 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 21:46:25.112200 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 21:46:25.113634 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 21:46:25.126149 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 21:46:25.131548 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 21:46:25.133385 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 21:46:25.134293 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:46:25.134328 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 21:46:25.136086 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 21:46:25.138029 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 21:46:25.139922 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 21:46:25.140820 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 14 21:46:25.142144 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 21:46:25.143912 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 21:46:25.144803 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:46:25.148680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 21:46:25.150385 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:46:25.151424 systemd-journald[1116]: Time spent on flushing to /var/log/journal/75be876a813740b9b357e5d0c35a95d7 is 30.765ms for 850 entries. Jul 14 21:46:25.151424 systemd-journald[1116]: System Journal (/var/log/journal/75be876a813740b9b357e5d0c35a95d7) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:46:25.190827 systemd-journald[1116]: Received client request to flush runtime journal. Jul 14 21:46:25.191920 kernel: loop0: detected capacity change from 0 to 114432 Jul 14 21:46:25.191949 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:46:25.152691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:46:25.154452 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 21:46:25.159598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 21:46:25.162291 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:46:25.163677 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 21:46:25.164813 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 21:46:25.166005 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 21:46:25.181721 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 21:46:25.183039 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 21:46:25.184230 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 21:46:25.188576 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 21:46:25.190941 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:46:25.197767 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 21:46:25.207222 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:46:25.208651 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 21:46:25.212314 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 21:46:25.216541 kernel: loop1: detected capacity change from 0 to 211168 Jul 14 21:46:25.226534 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 21:46:25.234638 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 21:46:25.241464 kernel: loop2: detected capacity change from 0 to 114328 Jul 14 21:46:25.255837 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Jul 14 21:46:25.256170 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. 
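
The flush line above gives enough to estimate the per-entry cost of moving the runtime journal to persistent storage: 30.765 ms for 850 entries is about 36 microseconds per entry.

    flush_ms, entries = 30.765, 850                     # from the journald line
    print(f"{flush_ms * 1000 / entries:.1f} us/entry")  # ~36.2 us per entry
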
Jul 14 21:46:25.261055 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:46:25.277475 kernel: loop3: detected capacity change from 0 to 114432 Jul 14 21:46:25.281480 kernel: loop4: detected capacity change from 0 to 211168 Jul 14 21:46:25.294462 kernel: loop5: detected capacity change from 0 to 114328 Jul 14 21:46:25.297482 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 21:46:25.297891 (sd-merge)[1182]: Merged extensions into '/usr'. Jul 14 21:46:25.301259 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 21:46:25.301278 systemd[1]: Reloading... Jul 14 21:46:25.351491 zram_generator::config[1206]: No configuration found. Jul 14 21:46:25.429354 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 21:46:25.464642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:46:25.500147 systemd[1]: Reloading finished in 198 ms. Jul 14 21:46:25.536922 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 21:46:25.538070 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 21:46:25.551638 systemd[1]: Starting ensure-sysext.service... Jul 14 21:46:25.553392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 21:46:25.565304 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jul 14 21:46:25.565326 systemd[1]: Reloading... Jul 14 21:46:25.576740 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 21:46:25.577126 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 21:46:25.578099 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 21:46:25.578553 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jul 14 21:46:25.578679 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jul 14 21:46:25.582705 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 21:46:25.582717 systemd-tmpfiles[1243]: Skipping /boot Jul 14 21:46:25.590299 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 21:46:25.590316 systemd-tmpfiles[1243]: Skipping /boot Jul 14 21:46:25.623467 zram_generator::config[1270]: No configuration found. Jul 14 21:46:25.686778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:46:25.722775 systemd[1]: Reloading finished in 157 ms. Jul 14 21:46:25.739493 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 21:46:25.750931 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:46:25.758374 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 21:46:25.760796 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
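
The (sd-merge) lines above are systemd-sysext layering the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images over /usr as a read-only overlay-style mount, with later layers shadowing earlier ones on path conflicts. A toy Python model of that merge, purely illustrative of the precedence:

    def merge_sysext(base: dict, extensions: list) -> dict:
        """Toy /usr merge: start from the base tree and layer each extension
        on top; a file supplied by a later extension shadows earlier ones."""
        merged = dict(base)
        for ext in extensions:
            merged.update(ext)
        return merged

    usr = {"bin/true": "base-os"}
    layers = [{"bin/containerd": "containerd-flatcar"},
              {"bin/kubelet": "kubernetes"}]
    print(merge_sysext(usr, layers))   # base files plus both extensions
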
Jul 14 21:46:25.762964 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 21:46:25.765801 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 21:46:25.769914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:46:25.776320 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 21:46:25.779090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:46:25.780377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:46:25.789382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:46:25.795159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:46:25.796148 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:46:25.796941 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:46:25.797098 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:46:25.799930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:46:25.800063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:46:25.803006 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:46:25.818397 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 21:46:25.820379 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 21:46:25.822088 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:46:25.822221 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:46:25.831583 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Jul 14 21:46:25.833838 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:46:25.835834 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:46:25.837959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:46:25.841264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:46:25.844666 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:46:25.846049 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 21:46:25.848135 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 21:46:25.849753 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:46:25.849891 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:46:25.851474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:46:25.851602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:46:25.857396 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 21:46:25.861814 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 14 21:46:25.867937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:46:25.878668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:46:25.881805 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:46:25.886459 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:46:25.888864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:46:25.893705 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 21:46:25.895642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:46:25.899182 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 21:46:25.900678 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 21:46:25.906328 systemd[1]: Finished ensure-sysext.service. Jul 14 21:46:25.909737 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:46:25.909892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:46:25.911517 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1338) Jul 14 21:46:25.923017 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:46:25.923171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:46:25.927253 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:46:25.927416 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:46:25.946856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:46:25.947449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:46:25.974661 augenrules[1381]: No rules Jul 14 21:46:25.976172 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 21:46:25.979750 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 14 21:46:25.984205 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:46:25.984277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:46:25.993749 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 21:46:25.996358 systemd-resolved[1310]: Positive Trust Anchors: Jul 14 21:46:25.996379 systemd-resolved[1310]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:46:25.996412 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 21:46:25.997686 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 21:46:26.007995 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 21:46:26.008080 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jul 14 21:46:26.019463 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 21:46:26.022613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 21:46:26.037864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 21:46:26.039312 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 21:46:26.046843 systemd-networkd[1364]: lo: Link UP Jul 14 21:46:26.047146 systemd-networkd[1364]: lo: Gained carrier Jul 14 21:46:26.048010 systemd-networkd[1364]: Enumeration completed Jul 14 21:46:26.048282 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 21:46:26.049313 systemd[1]: Reached target network.target - Network. Jul 14 21:46:26.055955 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:46:26.056052 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:46:26.056875 systemd-networkd[1364]: eth0: Link UP Jul 14 21:46:26.056975 systemd-networkd[1364]: eth0: Gained carrier Jul 14 21:46:26.057047 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:46:26.062145 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 21:46:26.063613 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 21:46:26.068794 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 21:46:26.069950 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 21:46:26.071177 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 21:46:26.086124 systemd-networkd[1364]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:46:26.089922 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Jul 14 21:46:26.090828 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 21:46:26.090882 systemd-timesyncd[1389]: Initial clock synchronization to Mon 2025-07-14 21:46:25.837553 UTC. Jul 14 21:46:26.091996 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
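
The positive trust anchor above is the DNSSEC root key in DS form, in the standard RFC 4034 presentation format: owner ".", key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), then the 32-byte digest. A small parser for that record:

    from typing import NamedTuple

    class DSRecord(NamedTuple):
        owner: str
        key_tag: int
        algorithm: int    # 8 = RSASHA256
        digest_type: int  # 2 = SHA-256
        digest: bytes

    def parse_ds(line: str) -> DSRecord:
        """Parse '<owner> IN DS <tag> <alg> <dtype> <hex-digest>'."""
        owner, _in, _ds, tag, alg, dtype, digest = line.split()
        return DSRecord(owner, int(tag), int(alg), int(dtype),
                        bytes.fromhex(digest))

    anchor = parse_ds(". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d084"
                      "58e880409bbc683457104237c7f8ec8d")
    assert anchor.key_tag == 20326 and len(anchor.digest) == 32
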
Jul 14 21:46:26.104142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:46:26.124092 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 21:46:26.125302 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:46:26.126211 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 21:46:26.127091 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 21:46:26.128023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 21:46:26.129105 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 21:46:26.130074 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 21:46:26.131059 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 21:46:26.131968 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:46:26.132005 systemd[1]: Reached target paths.target - Path Units. Jul 14 21:46:26.132869 systemd[1]: Reached target timers.target - Timer Units. Jul 14 21:46:26.134514 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 21:46:26.136781 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 21:46:26.148619 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 21:46:26.151133 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 21:46:26.152576 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 21:46:26.153505 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 21:46:26.154193 systemd[1]: Reached target basic.target - Basic System. Jul 14 21:46:26.154937 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:46:26.154968 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:46:26.155970 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 21:46:26.157861 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 21:46:26.158861 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:46:26.161583 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 21:46:26.164728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 21:46:26.165850 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 21:46:26.169718 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 21:46:26.173250 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 21:46:26.175209 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 21:46:26.177484 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 21:46:26.178618 jq[1412]: false Jul 14 21:46:26.183717 systemd[1]: Starting systemd-logind.service - User Login Management... 
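
Note the ordering above: systemd is already "Listening on dbus.socket" and docker.socket before the matching services run. That is socket activation: when a service starts, the pre-bound sockets are handed to it as inherited file descriptors beginning at fd 3, advertised via $LISTEN_FDS and $LISTEN_PID (sd_listen_fds(3)). A minimal Python receiver for that protocol:

    import os
    import socket

    SD_LISTEN_FDS_START = 3   # first inherited fd, per sd_listen_fds(3)

    def inherited_sockets() -> list:
        """Adopt sockets passed in by systemd socket activation."""
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []          # not activated for this process
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i)
                for i in range(n)]

    # A socket-activated daemon accept()s on these instead of binding itself.
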
Jul 14 21:46:26.188930 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:46:26.189496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 21:46:26.190510 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 21:46:26.193719 extend-filesystems[1413]: Found loop3 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found loop4 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found loop5 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda1 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda2 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda3 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found usr Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda4 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda6 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda7 Jul 14 21:46:26.193719 extend-filesystems[1413]: Found vda9 Jul 14 21:46:26.193719 extend-filesystems[1413]: Checking size of /dev/vda9 Jul 14 21:46:26.194618 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 21:46:26.202588 dbus-daemon[1411]: [system] SELinux support is enabled Jul 14 21:46:26.198551 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 21:46:26.227852 extend-filesystems[1413]: Resized partition /dev/vda9 Jul 14 21:46:26.231168 jq[1426]: true Jul 14 21:46:26.201922 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 21:46:26.231406 extend-filesystems[1438]: resize2fs 1.47.1 (20-May-2024) Jul 14 21:46:26.235551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1356) Jul 14 21:46:26.235625 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:46:26.202085 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 21:46:26.205651 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 21:46:26.212217 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:46:26.212495 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 21:46:26.216992 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:46:26.217167 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 21:46:26.243552 jq[1436]: true Jul 14 21:46:26.249832 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 21:46:26.256279 update_engine[1423]: I20250714 21:46:26.255956 1423 main.cc:92] Flatcar Update Engine starting Jul 14 21:46:26.257327 tar[1432]: linux-arm64/LICENSE Jul 14 21:46:26.257327 tar[1432]: linux-arm64/helm Jul 14 21:46:26.263785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:46:26.263818 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
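
The resize messages here record the first-boot growth of the root filesystem: /dev/vda9 goes from 553472 to 1864699 blocks, and the extend-filesystems summary just below confirms the block size is 4k. In bytes, that is roughly 2.1 GiB growing to 7.1 GiB:

    BLOCK = 4096                                    # 4k blocks, per the summary
    old_blocks, new_blocks = 553_472, 1_864_699     # from the EXT4 messages
    for blocks in (old_blocks, new_blocks):
        print(f"{blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB -> ~7.11 GiB
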
Jul 14 21:46:26.266002 update_engine[1423]: I20250714 21:46:26.265620 1423 update_check_scheduler.cc:74] Next update check in 5m57s Jul 14 21:46:26.266764 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 21:46:26.266794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 21:46:26.268177 systemd[1]: Started update-engine.service - Update Engine. Jul 14 21:46:26.277463 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:46:26.278718 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 21:46:26.306751 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:46:26.306751 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:46:26.306751 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:46:26.308597 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:46:26.318572 extend-filesystems[1413]: Resized filesystem in /dev/vda9 Jul 14 21:46:26.310137 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:46:26.312494 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 21:46:26.312560 systemd-logind[1419]: New seat seat0. Jul 14 21:46:26.314370 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 21:46:26.325405 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:46:26.327523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 21:46:26.329038 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 21:46:26.357313 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:46:26.459100 containerd[1439]: time="2025-07-14T21:46:26.456720080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 21:46:26.489316 containerd[1439]: time="2025-07-14T21:46:26.489158200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:46:26.491065 containerd[1439]: time="2025-07-14T21:46:26.491021840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:46:26.491280 containerd[1439]: time="2025-07-14T21:46:26.491260320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:46:26.491364 containerd[1439]: time="2025-07-14T21:46:26.491349880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:46:26.491809 containerd[1439]: time="2025-07-14T21:46:26.491787040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 21:46:26.491907 containerd[1439]: time="2025-07-14T21:46:26.491892800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 14 21:46:26.492164 containerd[1439]: time="2025-07-14T21:46:26.492140720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:46:26.492297 containerd[1439]: time="2025-07-14T21:46:26.492258440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:46:26.492728 containerd[1439]: time="2025-07-14T21:46:26.492655680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:46:26.492728 containerd[1439]: time="2025-07-14T21:46:26.492678440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:46:26.492728 containerd[1439]: time="2025-07-14T21:46:26.492703880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:46:26.493284 containerd[1439]: time="2025-07-14T21:46:26.492716600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:46:26.493284 containerd[1439]: time="2025-07-14T21:46:26.493176680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:46:26.493742 containerd[1439]: time="2025-07-14T21:46:26.493712480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:46:26.494067 containerd[1439]: time="2025-07-14T21:46:26.494044800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:46:26.494556 containerd[1439]: time="2025-07-14T21:46:26.494210480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:46:26.494556 containerd[1439]: time="2025-07-14T21:46:26.494329640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 21:46:26.494556 containerd[1439]: time="2025-07-14T21:46:26.494374280Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:46:26.498974 containerd[1439]: time="2025-07-14T21:46:26.498945040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:46:26.499158 containerd[1439]: time="2025-07-14T21:46:26.499140200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:46:26.499314 containerd[1439]: time="2025-07-14T21:46:26.499297400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 21:46:26.499473 containerd[1439]: time="2025-07-14T21:46:26.499431800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.499547760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.499738840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.499982080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500107360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500124520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500138200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500158200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500177160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500190200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500205320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500220600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500234480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500247840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.500830 containerd[1439]: time="2025-07-14T21:46:26.500260680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500286880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500309960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500323360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500337480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500349520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500363360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500376280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500401280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500415040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500429440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500467120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500480400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500494520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500511760Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 21:46:26.501127 containerd[1439]: time="2025-07-14T21:46:26.500533480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501388 containerd[1439]: time="2025-07-14T21:46:26.500548480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.501388 containerd[1439]: time="2025-07-14T21:46:26.500559680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:46:26.502192 containerd[1439]: time="2025-07-14T21:46:26.502161640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:46:26.502342 containerd[1439]: time="2025-07-14T21:46:26.502321800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 21:46:26.502395 containerd[1439]: time="2025-07-14T21:46:26.502382280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:46:26.502520 containerd[1439]: time="2025-07-14T21:46:26.502449120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 21:46:26.502596 containerd[1439]: time="2025-07-14T21:46:26.502580040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 21:46:26.502665 containerd[1439]: time="2025-07-14T21:46:26.502643600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 21:46:26.502730 containerd[1439]: time="2025-07-14T21:46:26.502717520Z" level=info msg="NRI interface is disabled by configuration." Jul 14 21:46:26.502837 containerd[1439]: time="2025-07-14T21:46:26.502777520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 21:46:26.503645 containerd[1439]: time="2025-07-14T21:46:26.503527880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:46:26.503913 containerd[1439]: time="2025-07-14T21:46:26.503890840Z" level=info msg="Connect containerd service" Jul 14 21:46:26.504634 containerd[1439]: time="2025-07-14T21:46:26.504032000Z" level=info msg="using legacy CRI server" Jul 14 21:46:26.504634 containerd[1439]: time="2025-07-14T21:46:26.504044200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 21:46:26.504634 containerd[1439]: time="2025-07-14T21:46:26.504151520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:46:26.505928 containerd[1439]: time="2025-07-14T21:46:26.505845560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:46:26.506631 
containerd[1439]: time="2025-07-14T21:46:26.506608560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:46:26.506838 containerd[1439]: time="2025-07-14T21:46:26.506811040Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:46:26.507266 containerd[1439]: time="2025-07-14T21:46:26.507072680Z" level=info msg="Start subscribing containerd event" Jul 14 21:46:26.507266 containerd[1439]: time="2025-07-14T21:46:26.507123280Z" level=info msg="Start recovering state" Jul 14 21:46:26.507266 containerd[1439]: time="2025-07-14T21:46:26.507193480Z" level=info msg="Start event monitor" Jul 14 21:46:26.507266 containerd[1439]: time="2025-07-14T21:46:26.507204560Z" level=info msg="Start snapshots syncer" Jul 14 21:46:26.507387 containerd[1439]: time="2025-07-14T21:46:26.507371480Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:46:26.507514 containerd[1439]: time="2025-07-14T21:46:26.507496920Z" level=info msg="Start streaming server" Jul 14 21:46:26.507843 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 21:46:26.510368 containerd[1439]: time="2025-07-14T21:46:26.509425680Z" level=info msg="containerd successfully booted in 0.054996s" Jul 14 21:46:26.668242 tar[1432]: linux-arm64/README.md Jul 14 21:46:26.681481 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 21:46:26.715625 sshd_keygen[1433]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:46:26.740066 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 21:46:26.751765 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 21:46:26.757205 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 21:46:26.757401 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 21:46:26.760166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 21:46:26.772835 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 21:46:26.777586 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 21:46:26.779544 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 21:46:26.780720 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 21:46:27.594557 systemd-networkd[1364]: eth0: Gained IPv6LL Jul 14 21:46:27.596993 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 21:46:27.598542 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 21:46:27.608658 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 21:46:27.610736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:46:27.612432 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 21:46:27.626626 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:46:27.626814 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 21:46:27.628520 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 21:46:27.636178 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 21:46:28.186850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:46:28.188008 systemd[1]: Reached target multi-user.target - Multi-User System. 
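The "cni config load failed" error logged during containerd's init above is expected on a first boot: the CRI plugin looks for a network configuration under /etc/cni/net.d (NetworkPluginConfDir in the config dump) and nothing has installed one yet, which is also why a "cni network conf syncer" was started to pick a file up later. Purely as an illustration — a network add-on normally writes this file, and the name, bridge device, and subnet below are placeholders — a minimal conflist using the reference bridge and host-local plugins has this shape:

  {
    "cniVersion": "1.0.0",
    "name": "examplenet",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "subnet": "10.244.0.0/24"
        }
      }
    ]
  }

Since the config above sets NetworkPluginMaxConfNum:1, only the lexically first file in /etc/cni/net.d (e.g. 10-examplenet.conflist) would be loaded.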
Jul 14 21:46:28.189048 systemd[1]: Startup finished in 549ms (kernel) + 4.851s (initrd) + 3.671s (userspace) = 9.073s. Jul 14 21:46:28.190405 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:46:28.602695 kubelet[1525]: E0714 21:46:28.602627 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:46:28.605156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:46:28.605305 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:46:32.656059 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 21:46:32.657223 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:32984.service - OpenSSH per-connection server daemon (10.0.0.1:32984). Jul 14 21:46:32.719062 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 32984 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:32.720358 sshd[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:32.729536 systemd-logind[1419]: New session 1 of user core. Jul 14 21:46:32.730638 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 21:46:32.748021 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 21:46:32.757413 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 21:46:32.762078 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 21:46:32.770788 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:46:32.856752 systemd[1542]: Queued start job for default target default.target. Jul 14 21:46:32.868411 systemd[1542]: Created slice app.slice - User Application Slice. Jul 14 21:46:32.868465 systemd[1542]: Reached target paths.target - Paths. Jul 14 21:46:32.868478 systemd[1542]: Reached target timers.target - Timers. Jul 14 21:46:32.869786 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:46:32.882300 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:46:32.882414 systemd[1542]: Reached target sockets.target - Sockets. Jul 14 21:46:32.882447 systemd[1542]: Reached target basic.target - Basic System. Jul 14 21:46:32.882480 systemd[1542]: Reached target default.target - Main User Target. Jul 14 21:46:32.882506 systemd[1542]: Startup finished in 103ms. Jul 14 21:46:32.883035 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:46:32.885328 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:46:32.949205 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:32998.service - OpenSSH per-connection server daemon (10.0.0.1:32998). Jul 14 21:46:32.983327 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 32998 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:32.984947 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:32.989425 systemd-logind[1419]: New session 2 of user core. Jul 14 21:46:32.998643 systemd[1]: Started session-2.scope - Session 2 of User core. 
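The kubelet failure above is the normal pre-bootstrap crash loop on a kubeadm-style node: kubelet.service is enabled before kubeadm has run, and the kubelet refuses to start until /var/lib/kubelet/config.yaml exists (kubeadm init/join writes it); systemd keeps restarting the unit until then, as the "restart counter is at 1" line further down shows. For orientation only, with placeholder values rather than what kubeadm actually wrote on this host, a minimal KubeletConfiguration looks like:

  # /var/lib/kubelet/config.yaml (illustrative sketch, not this host's file)
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd            # matches SystemdCgroup:true in the containerd CRI config above
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    x509:
      clientCAFile: /etc/kubernetes/pki/ca.crt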
Jul 14 21:46:33.052994 sshd[1553]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:33.068201 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:32998.service: Deactivated successfully. Jul 14 21:46:33.070035 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:46:33.073297 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:46:33.074098 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:33012.service - OpenSSH per-connection server daemon (10.0.0.1:33012). Jul 14 21:46:33.075041 systemd-logind[1419]: Removed session 2. Jul 14 21:46:33.110642 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 33012 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:33.112142 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:33.118285 systemd-logind[1419]: New session 3 of user core. Jul 14 21:46:33.127668 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:46:33.179702 sshd[1560]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:33.192811 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:33012.service: Deactivated successfully. Jul 14 21:46:33.194085 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:46:33.195344 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:46:33.196389 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:33028.service - OpenSSH per-connection server daemon (10.0.0.1:33028). Jul 14 21:46:33.197082 systemd-logind[1419]: Removed session 3. Jul 14 21:46:33.231607 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 33028 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:33.232986 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:33.236626 systemd-logind[1419]: New session 4 of user core. Jul 14 21:46:33.252644 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 21:46:33.307696 sshd[1567]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:33.316944 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:33028.service: Deactivated successfully. Jul 14 21:46:33.318403 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:46:33.320766 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:46:33.331767 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:33042.service - OpenSSH per-connection server daemon (10.0.0.1:33042). Jul 14 21:46:33.332851 systemd-logind[1419]: Removed session 4. Jul 14 21:46:33.361202 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 33042 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:33.362517 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:33.366513 systemd-logind[1419]: New session 5 of user core. Jul 14 21:46:33.375602 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 21:46:33.433803 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 21:46:33.434089 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:46:33.446146 sudo[1577]: pam_unix(sudo:session): session closed for user root Jul 14 21:46:33.447769 sshd[1574]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:33.461763 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:33042.service: Deactivated successfully. 
Jul 14 21:46:33.463154 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:46:33.464999 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:46:33.466218 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:33046.service - OpenSSH per-connection server daemon (10.0.0.1:33046). Jul 14 21:46:33.466975 systemd-logind[1419]: Removed session 5. Jul 14 21:46:33.499239 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 33046 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:33.500324 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:33.504063 systemd-logind[1419]: New session 6 of user core. Jul 14 21:46:33.512592 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 21:46:33.566871 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 21:46:33.567515 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:46:33.570415 sudo[1586]: pam_unix(sudo:session): session closed for user root Jul 14 21:46:33.574801 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 21:46:33.575074 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:46:33.591750 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 21:46:33.592813 auditctl[1589]: No rules Jul 14 21:46:33.593654 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:46:33.594531 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 21:46:33.596157 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 21:46:33.618929 augenrules[1607]: No rules Jul 14 21:46:33.619767 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 21:46:33.621279 sudo[1585]: pam_unix(sudo:session): session closed for user root Jul 14 21:46:33.622755 sshd[1582]: pam_unix(sshd:session): session closed for user core Jul 14 21:46:33.632728 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:33046.service: Deactivated successfully. Jul 14 21:46:33.634193 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:46:33.635572 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:46:33.647745 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:33048.service - OpenSSH per-connection server daemon (10.0.0.1:33048). Jul 14 21:46:33.648699 systemd-logind[1419]: Removed session 6. Jul 14 21:46:33.676942 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 33048 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:46:33.678104 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:46:33.681977 systemd-logind[1419]: New session 7 of user core. Jul 14 21:46:33.692581 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 21:46:33.741656 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:46:33.742203 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:46:34.065762 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 14 21:46:34.065892 (dockerd)[1636]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 21:46:34.334448 dockerd[1636]: time="2025-07-14T21:46:34.334382146Z" level=info msg="Starting up" Jul 14 21:46:34.486801 dockerd[1636]: time="2025-07-14T21:46:34.486753135Z" level=info msg="Loading containers: start." Jul 14 21:46:34.581487 kernel: Initializing XFRM netlink socket Jul 14 21:46:34.658429 systemd-networkd[1364]: docker0: Link UP Jul 14 21:46:34.674739 dockerd[1636]: time="2025-07-14T21:46:34.674693199Z" level=info msg="Loading containers: done." Jul 14 21:46:34.687584 dockerd[1636]: time="2025-07-14T21:46:34.687541791Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:46:34.687747 dockerd[1636]: time="2025-07-14T21:46:34.687656666Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 21:46:34.687783 dockerd[1636]: time="2025-07-14T21:46:34.687771303Z" level=info msg="Daemon has completed initialization" Jul 14 21:46:34.687816 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck738156954-merged.mount: Deactivated successfully. Jul 14 21:46:34.719924 dockerd[1636]: time="2025-07-14T21:46:34.719796219Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:46:34.720061 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 21:46:35.185997 containerd[1439]: time="2025-07-14T21:46:35.185759809Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 14 21:46:35.884221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1359704760.mount: Deactivated successfully. 
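With "API listen on /run/docker.sock" logged, the daemon can be exercised over that unix socket with plain HTTP. A minimal Go sketch using only the standard library — the /version endpoint is part of the stable Engine API, everything else here is just socket plumbing:

  package main

  import (
      "context"
      "fmt"
      "io"
      "net"
      "net/http"
  )

  func main() {
      // Route all HTTP traffic over the daemon's unix socket instead of TCP.
      tr := &http.Transport{
          DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
              var d net.Dialer
              return d.DialContext(ctx, "unix", "/run/docker.sock")
          },
      }
      client := &http.Client{Transport: tr}

      // The host in the URL is a placeholder; DialContext above ignores it.
      resp, err := client.Get("http://docker/version")
      if err != nil {
          panic(err)
      }
      defer resp.Body.Close()

      body, _ := io.ReadAll(resp.Body)
      fmt.Printf("%s\n", body) // JSON with Version, ApiVersion, Os, Arch, ...
  }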
Jul 14 21:46:36.769192 containerd[1439]: time="2025-07-14T21:46:36.769143306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:36.770130 containerd[1439]: time="2025-07-14T21:46:36.769661686Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 14 21:46:36.770814 containerd[1439]: time="2025-07-14T21:46:36.770761996Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:36.775454 containerd[1439]: time="2025-07-14T21:46:36.774388558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:36.775650 containerd[1439]: time="2025-07-14T21:46:36.775610698Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.589808694s" Jul 14 21:46:36.775730 containerd[1439]: time="2025-07-14T21:46:36.775715253Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 14 21:46:36.779492 containerd[1439]: time="2025-07-14T21:46:36.779452631Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 14 21:46:37.845225 containerd[1439]: time="2025-07-14T21:46:37.845179195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:37.846114 containerd[1439]: time="2025-07-14T21:46:37.845889590Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 14 21:46:37.846840 containerd[1439]: time="2025-07-14T21:46:37.846803637Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:37.852103 containerd[1439]: time="2025-07-14T21:46:37.852035869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:37.853261 containerd[1439]: time="2025-07-14T21:46:37.853214931Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.073710466s" Jul 14 21:46:37.853261 containerd[1439]: time="2025-07-14T21:46:37.853252257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 14 21:46:37.853730 
containerd[1439]: time="2025-07-14T21:46:37.853707832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 14 21:46:38.832899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:46:38.840639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:46:38.952015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:46:38.955983 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:46:39.011796 kubelet[1853]: E0714 21:46:39.011721 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:46:39.015137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:46:39.015297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:46:39.052878 containerd[1439]: time="2025-07-14T21:46:39.052834020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:39.054058 containerd[1439]: time="2025-07-14T21:46:39.053547483Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 14 21:46:39.054385 containerd[1439]: time="2025-07-14T21:46:39.054330460Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:39.057150 containerd[1439]: time="2025-07-14T21:46:39.057101682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:39.059407 containerd[1439]: time="2025-07-14T21:46:39.059275258Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.20553603s" Jul 14 21:46:39.059407 containerd[1439]: time="2025-07-14T21:46:39.059312499Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 14 21:46:39.059937 containerd[1439]: time="2025-07-14T21:46:39.059911377Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 14 21:46:40.165391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754204291.mount: Deactivated successfully. 
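Each "Pulled image" line above pairs a content size with a wall-clock duration, so a rough effective pull rate falls out directly (rough because the reported size appears to be the compressed registry payload and the timer includes registry round-trips and unpacking):

  kube-apiserver:          27348516 B / 1.589808694 s ≈ 17.2 MB/s
  kube-controller-manager: 25092541 B / 1.073710466 s ≈ 23.4 MB/s
  kube-scheduler:          19848451 B / 1.20553603 s  ≈ 16.5 MB/s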
Jul 14 21:46:40.618584 containerd[1439]: time="2025-07-14T21:46:40.618514359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:40.619398 containerd[1439]: time="2025-07-14T21:46:40.619330621Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 14 21:46:40.620065 containerd[1439]: time="2025-07-14T21:46:40.620012755Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:40.622236 containerd[1439]: time="2025-07-14T21:46:40.622189745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:40.622921 containerd[1439]: time="2025-07-14T21:46:40.622757878Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.562735247s" Jul 14 21:46:40.622921 containerd[1439]: time="2025-07-14T21:46:40.622793598Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 14 21:46:40.623614 containerd[1439]: time="2025-07-14T21:46:40.623392797Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 14 21:46:41.186892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958002291.mount: Deactivated successfully. 
Jul 14 21:46:41.898371 containerd[1439]: time="2025-07-14T21:46:41.898307824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:41.899186 containerd[1439]: time="2025-07-14T21:46:41.899156661Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 14 21:46:41.900101 containerd[1439]: time="2025-07-14T21:46:41.900069186Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:41.903426 containerd[1439]: time="2025-07-14T21:46:41.903391731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:41.904823 containerd[1439]: time="2025-07-14T21:46:41.904788999Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.281289354s" Jul 14 21:46:41.904862 containerd[1439]: time="2025-07-14T21:46:41.904824226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 14 21:46:41.905413 containerd[1439]: time="2025-07-14T21:46:41.905381453Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:46:42.309788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126925251.mount: Deactivated successfully. 
Jul 14 21:46:42.314307 containerd[1439]: time="2025-07-14T21:46:42.314266670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:42.314745 containerd[1439]: time="2025-07-14T21:46:42.314721958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 21:46:42.315504 containerd[1439]: time="2025-07-14T21:46:42.315466286Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:42.318545 containerd[1439]: time="2025-07-14T21:46:42.318490476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:42.319291 containerd[1439]: time="2025-07-14T21:46:42.319209273Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 413.793941ms" Jul 14 21:46:42.319291 containerd[1439]: time="2025-07-14T21:46:42.319241256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:46:42.319721 containerd[1439]: time="2025-07-14T21:46:42.319705067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 14 21:46:42.812063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503860762.mount: Deactivated successfully. Jul 14 21:46:44.223584 containerd[1439]: time="2025-07-14T21:46:44.223533064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:44.225291 containerd[1439]: time="2025-07-14T21:46:44.225114795Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 14 21:46:44.228328 containerd[1439]: time="2025-07-14T21:46:44.228184128Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:44.234008 containerd[1439]: time="2025-07-14T21:46:44.233971544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:46:44.235458 containerd[1439]: time="2025-07-14T21:46:44.235410783Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.915614378s" Jul 14 21:46:44.235458 containerd[1439]: time="2025-07-14T21:46:44.235451769Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 14 21:46:49.009499 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 21:46:49.017666 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:46:49.039756 systemd[1]: Reloading requested from client PID 2013 ('systemctl') (unit session-7.scope)... Jul 14 21:46:49.039773 systemd[1]: Reloading... Jul 14 21:46:49.105478 zram_generator::config[2052]: No configuration found. Jul 14 21:46:49.255617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:46:49.309047 systemd[1]: Reloading finished in 268 ms. Jul 14 21:46:49.342960 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 21:46:49.343023 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 21:46:49.343224 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:46:49.345712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:46:49.469213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:46:49.474008 (kubelet)[2098]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:46:49.505381 kubelet[2098]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:46:49.505381 kubelet[2098]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:46:49.505381 kubelet[2098]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
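The three deprecation warnings are the kubelet pointing at its own config-file migration: two of the flags have first-class KubeletConfiguration fields, and the third is going away entirely. The field names below are the real ones; the values are only what this host would plausibly use (the containerd socket and the Flexvolume directory both appear elsewhere in this log):

  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock        # replaces --container-runtime-endpoint
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir
  # --pod-infra-container-image has no config-file replacement; per the warning,
  # from 1.35 the sandbox image comes from the CRI runtime's own config
  # (containerd's sandbox_image, shown as SandboxImage:registry.k8s.io/pause:3.8 above).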
Jul 14 21:46:49.505836 kubelet[2098]: I0714 21:46:49.505413 2098 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:46:50.864063 kubelet[2098]: I0714 21:46:50.864010 2098 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 14 21:46:50.864063 kubelet[2098]: I0714 21:46:50.864043 2098 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:46:50.864461 kubelet[2098]: I0714 21:46:50.864254 2098 server.go:956] "Client rotation is on, will bootstrap in background" Jul 14 21:46:50.921135 kubelet[2098]: E0714 21:46:50.919493 2098 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 14 21:46:50.921371 kubelet[2098]: I0714 21:46:50.921339 2098 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:46:50.927920 kubelet[2098]: E0714 21:46:50.927523 2098 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:46:50.927920 kubelet[2098]: I0714 21:46:50.927559 2098 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:46:50.930095 kubelet[2098]: I0714 21:46:50.930076 2098 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:46:50.931184 kubelet[2098]: I0714 21:46:50.931148 2098 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:46:50.931423 kubelet[2098]: I0714 21:46:50.931269 2098 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:46:50.931647 kubelet[2098]: I0714 21:46:50.931632 2098 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:46:50.931703 kubelet[2098]: I0714 21:46:50.931695 2098 container_manager_linux.go:303] "Creating device plugin manager" Jul 14 21:46:50.931952 kubelet[2098]: I0714 21:46:50.931935 2098 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:46:50.936332 kubelet[2098]: I0714 21:46:50.936309 2098 kubelet.go:480] "Attempting to sync node with API server" Jul 14 21:46:50.936431 kubelet[2098]: I0714 21:46:50.936420 2098 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:46:50.936541 kubelet[2098]: I0714 21:46:50.936526 2098 kubelet.go:386] "Adding apiserver pod source" Jul 14 21:46:50.937767 kubelet[2098]: I0714 21:46:50.937750 2098 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:46:50.940460 kubelet[2098]: E0714 21:46:50.940412 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 14 21:46:50.940702 kubelet[2098]: E0714 21:46:50.940681 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 14 21:46:50.940873 
kubelet[2098]: I0714 21:46:50.940843 2098 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 21:46:50.942036 kubelet[2098]: I0714 21:46:50.942001 2098 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 14 21:46:50.943922 kubelet[2098]: W0714 21:46:50.943899 2098 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 21:46:50.948247 kubelet[2098]: I0714 21:46:50.948216 2098 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:46:50.948303 kubelet[2098]: I0714 21:46:50.948264 2098 server.go:1289] "Started kubelet" Jul 14 21:46:50.948462 kubelet[2098]: I0714 21:46:50.948376 2098 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:46:50.953524 kubelet[2098]: I0714 21:46:50.952630 2098 server.go:317] "Adding debug handlers to kubelet server" Jul 14 21:46:50.954854 kubelet[2098]: I0714 21:46:50.954822 2098 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:46:50.955596 kubelet[2098]: I0714 21:46:50.955362 2098 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:46:50.955844 kubelet[2098]: I0714 21:46:50.955810 2098 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:46:50.955908 kubelet[2098]: I0714 21:46:50.955889 2098 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:46:50.956431 kubelet[2098]: E0714 21:46:50.955183 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523c6556ec4794 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:46:50.948233108 +0000 UTC m=+1.470923010,LastTimestamp:2025-07-14 21:46:50.948233108 +0000 UTC m=+1.470923010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:46:50.956431 kubelet[2098]: I0714 21:46:50.956401 2098 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:46:50.956593 kubelet[2098]: I0714 21:46:50.956573 2098 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:46:50.956593 kubelet[2098]: I0714 21:46:50.956576 2098 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:46:50.958569 kubelet[2098]: E0714 21:46:50.957523 2098 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:46:50.958569 kubelet[2098]: E0714 21:46:50.957920 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Jul 14 21:46:50.958569 kubelet[2098]: E0714 21:46:50.958454 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 14 21:46:50.959776 kubelet[2098]: E0714 21:46:50.959756 2098 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:46:50.960441 kubelet[2098]: I0714 21:46:50.960406 2098 factory.go:223] Registration of the containerd container factory successfully Jul 14 21:46:50.960441 kubelet[2098]: I0714 21:46:50.960426 2098 factory.go:223] Registration of the systemd container factory successfully Jul 14 21:46:50.960544 kubelet[2098]: I0714 21:46:50.960524 2098 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:46:50.971457 kubelet[2098]: I0714 21:46:50.971417 2098 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:46:50.971457 kubelet[2098]: I0714 21:46:50.971459 2098 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:46:50.971558 kubelet[2098]: I0714 21:46:50.971478 2098 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:46:50.973666 kubelet[2098]: I0714 21:46:50.973613 2098 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 14 21:46:50.974742 kubelet[2098]: I0714 21:46:50.974722 2098 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 14 21:46:50.974742 kubelet[2098]: I0714 21:46:50.974749 2098 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 14 21:46:50.974844 kubelet[2098]: I0714 21:46:50.974771 2098 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
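All of the "connect: connection refused" errors against https://10.0.0.35:6443 are the expected bootstrap ordering problem: this kubelet is itself about to start the kube-apiserver static pod, so client-go's reflectors and the lease controller simply retry with backoff until something listens on port 6443. A quick way to watch for that transition, sketched in Go with the standard library only:

  package main

  import (
      "fmt"
      "net"
      "time"
  )

  func main() {
      for {
          // A bare TCP dial is enough to see when kube-apiserver starts listening;
          // no TLS handshake is needed just to detect the open port.
          conn, err := net.DialTimeout("tcp", "10.0.0.35:6443", time.Second)
          if err == nil {
              conn.Close()
              fmt.Println("apiserver port is up")
              return
          }
          fmt.Println("still refused:", err)
          time.Sleep(2 * time.Second)
      }
  }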
Jul 14 21:46:50.974844 kubelet[2098]: I0714 21:46:50.974779 2098 kubelet.go:2436] "Starting kubelet main sync loop" Jul 14 21:46:50.974844 kubelet[2098]: E0714 21:46:50.974827 2098 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:46:50.975489 kubelet[2098]: E0714 21:46:50.975307 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 14 21:46:51.046141 kubelet[2098]: I0714 21:46:51.046096 2098 policy_none.go:49] "None policy: Start" Jul 14 21:46:51.046141 kubelet[2098]: I0714 21:46:51.046132 2098 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:46:51.046141 kubelet[2098]: I0714 21:46:51.046145 2098 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:46:51.051208 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 21:46:51.060213 kubelet[2098]: E0714 21:46:51.060172 2098 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:46:51.063937 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 21:46:51.075752 kubelet[2098]: E0714 21:46:51.075710 2098 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 21:46:51.081964 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 21:46:51.083069 kubelet[2098]: E0714 21:46:51.082853 2098 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 14 21:46:51.083069 kubelet[2098]: I0714 21:46:51.083048 2098 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:46:51.083164 kubelet[2098]: I0714 21:46:51.083060 2098 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:46:51.083297 kubelet[2098]: I0714 21:46:51.083216 2098 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:46:51.083930 kubelet[2098]: E0714 21:46:51.083853 2098 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 21:46:51.083930 kubelet[2098]: E0714 21:46:51.083890 2098 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:46:51.158838 kubelet[2098]: E0714 21:46:51.158711 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Jul 14 21:46:51.184914 kubelet[2098]: I0714 21:46:51.184862 2098 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:46:51.185383 kubelet[2098]: E0714 21:46:51.185347 2098 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 14 21:46:51.285965 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 14 21:46:51.310186 kubelet[2098]: E0714 21:46:51.310140 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:46:51.312788 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 14 21:46:51.314814 kubelet[2098]: E0714 21:46:51.314753 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:46:51.316892 systemd[1]: Created slice kubepods-burstable-pod70c6731afdcc3a5de6f5905affddfe31.slice - libcontainer container kubepods-burstable-pod70c6731afdcc3a5de6f5905affddfe31.slice. 
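The three kubepods-burstable-pod<hash>.slice units correspond to the static pods the kubelet found under /etc/kubernetes/manifests (the "Adding static pod path" line above); the hash in each slice name is the pod UID the kubelet assigns to the manifest, and the same UIDs appear in the volume-mount lines below. The paired "No need to create a mirror pod" errors are also normal at this stage: mirror pods are the API-server-side reflections of static pods, and they cannot be created until the node registers. A static pod manifest is an ordinary Pod object on disk; a deliberately skeletal example — shape only, the real kubeadm-generated manifests carry many more flags, probes, and mounts:

  # /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative skeleton)
  apiVersion: v1
  kind: Pod
  metadata:
    name: kube-apiserver
    namespace: kube-system
  spec:
    hostNetwork: true
    priorityClassName: system-node-critical
    containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.33.2
      command:
      - kube-apiserver
      - --advertise-address=10.0.0.35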
Jul 14 21:46:51.318323 kubelet[2098]: E0714 21:46:51.318169 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:46:51.357708 kubelet[2098]: I0714 21:46:51.357633 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:51.357708 kubelet[2098]: I0714 21:46:51.357673 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:51.357708 kubelet[2098]: I0714 21:46:51.357696 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:51.357878 kubelet[2098]: I0714 21:46:51.357759 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:51.357878 kubelet[2098]: I0714 21:46:51.357809 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70c6731afdcc3a5de6f5905affddfe31-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70c6731afdcc3a5de6f5905affddfe31\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:51.357878 kubelet[2098]: I0714 21:46:51.357844 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70c6731afdcc3a5de6f5905affddfe31-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70c6731afdcc3a5de6f5905affddfe31\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:51.357878 kubelet[2098]: I0714 21:46:51.357871 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:51.357971 kubelet[2098]: I0714 21:46:51.357899 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:51.357971 kubelet[2098]: I0714 21:46:51.357912 2098 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70c6731afdcc3a5de6f5905affddfe31-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70c6731afdcc3a5de6f5905affddfe31\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:51.387498 kubelet[2098]: I0714 21:46:51.387474 2098 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:46:51.387799 kubelet[2098]: E0714 21:46:51.387770 2098 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Jul 14 21:46:51.559530 kubelet[2098]: E0714 21:46:51.559403 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms"
Jul 14 21:46:51.611126 kubelet[2098]: E0714 21:46:51.610713 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:51.611416 containerd[1439]: time="2025-07-14T21:46:51.611376927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 14 21:46:51.615905 kubelet[2098]: E0714 21:46:51.615635 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:51.616076 containerd[1439]: time="2025-07-14T21:46:51.616042250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 14 21:46:51.619499 kubelet[2098]: E0714 21:46:51.619430 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:51.619955 containerd[1439]: time="2025-07-14T21:46:51.619920065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70c6731afdcc3a5de6f5905affddfe31,Namespace:kube-system,Attempt:0,}"
Jul 14 21:46:51.789308 kubelet[2098]: I0714 21:46:51.789222 2098 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:46:51.789562 kubelet[2098]: E0714 21:46:51.789538 2098 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Jul 14 21:46:51.861358 kubelet[2098]: E0714 21:46:51.861154 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 14 21:46:51.895058 kubelet[2098]: E0714 21:46:51.895003 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 14 21:46:52.003159 kubelet[2098]: E0714 21:46:52.003047 2098 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523c6556ec4794 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:46:50.948233108 +0000 UTC m=+1.470923010,LastTimestamp:2025-07-14 21:46:50.948233108 +0000 UTC m=+1.470923010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 21:46:52.108327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount393848672.mount: Deactivated successfully.
Jul 14 21:46:52.114155 containerd[1439]: time="2025-07-14T21:46:52.114025521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 21:46:52.114962 containerd[1439]: time="2025-07-14T21:46:52.114897261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 21:46:52.115710 containerd[1439]: time="2025-07-14T21:46:52.115658565Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 21:46:52.116472 containerd[1439]: time="2025-07-14T21:46:52.116187690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 21:46:52.116808 containerd[1439]: time="2025-07-14T21:46:52.116778545Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 14 21:46:52.117396 containerd[1439]: time="2025-07-14T21:46:52.117366564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 21:46:52.118046 containerd[1439]: time="2025-07-14T21:46:52.117985028Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 21:46:52.122609 containerd[1439]: time="2025-07-14T21:46:52.122558804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 21:46:52.123703 containerd[1439]: time="2025-07-14T21:46:52.123648698Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.160833ms"
Jul 14 21:46:52.124479 containerd[1439]: time="2025-07-14T21:46:52.124428661Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.401333ms"
Jul 14 21:46:52.127181 containerd[1439]: time="2025-07-14T21:46:52.126962611Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.851489ms"
Jul 14 21:46:52.283212 containerd[1439]: time="2025-07-14T21:46:52.283112035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:46:52.283212 containerd[1439]: time="2025-07-14T21:46:52.283177402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:46:52.283212 containerd[1439]: time="2025-07-14T21:46:52.283193903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:46:52.283462 containerd[1439]: time="2025-07-14T21:46:52.283277289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:46:52.284607 containerd[1439]: time="2025-07-14T21:46:52.284491684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:46:52.284607 containerd[1439]: time="2025-07-14T21:46:52.284571793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:46:52.284607 containerd[1439]: time="2025-07-14T21:46:52.284586297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:46:52.284900 containerd[1439]: time="2025-07-14T21:46:52.284798379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:46:52.287063 containerd[1439]: time="2025-07-14T21:46:52.286867492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:46:52.287063 containerd[1439]: time="2025-07-14T21:46:52.286912441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:46:52.287063 containerd[1439]: time="2025-07-14T21:46:52.286923389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:46:52.287063 containerd[1439]: time="2025-07-14T21:46:52.287000542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:46:52.305636 systemd[1]: Started cri-containerd-f76cc2a8a346bb44f2c131bd2430f24a776f266d35071ed8aae2339f5dd1ab46.scope - libcontainer container f76cc2a8a346bb44f2c131bd2430f24a776f266d35071ed8aae2339f5dd1ab46.
Jul 14 21:46:52.310175 systemd[1]: Started cri-containerd-6236ec1cea4cfeae0ef27b7f5cfb9471b87dbdf8b06b50dee42fa5b04a51d51d.scope - libcontainer container 6236ec1cea4cfeae0ef27b7f5cfb9471b87dbdf8b06b50dee42fa5b04a51d51d.
Jul 14 21:46:52.311900 systemd[1]: Started cri-containerd-d2330573f35f10c291be21685f878dc02b279a60e793c481cb063d7de9af08b9.scope - libcontainer container d2330573f35f10c291be21685f878dc02b279a60e793c481cb063d7de9af08b9.
Jul 14 21:46:52.348681 containerd[1439]: time="2025-07-14T21:46:52.348608214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"f76cc2a8a346bb44f2c131bd2430f24a776f266d35071ed8aae2339f5dd1ab46\""
Jul 14 21:46:52.348807 containerd[1439]: time="2025-07-14T21:46:52.348640697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"6236ec1cea4cfeae0ef27b7f5cfb9471b87dbdf8b06b50dee42fa5b04a51d51d\""
Jul 14 21:46:52.349639 kubelet[2098]: E0714 21:46:52.349608 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:52.350456 kubelet[2098]: E0714 21:46:52.350401 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:52.353556 containerd[1439]: time="2025-07-14T21:46:52.353520969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:70c6731afdcc3a5de6f5905affddfe31,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2330573f35f10c291be21685f878dc02b279a60e793c481cb063d7de9af08b9\""
Jul 14 21:46:52.354842 containerd[1439]: time="2025-07-14T21:46:52.354796534Z" level=info msg="CreateContainer within sandbox \"f76cc2a8a346bb44f2c131bd2430f24a776f266d35071ed8aae2339f5dd1ab46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 14 21:46:52.355372 kubelet[2098]: E0714 21:46:52.355268 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:52.355982 containerd[1439]: time="2025-07-14T21:46:52.355945921Z" level=info msg="CreateContainer within sandbox \"6236ec1cea4cfeae0ef27b7f5cfb9471b87dbdf8b06b50dee42fa5b04a51d51d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 14 21:46:52.359201 containerd[1439]: time="2025-07-14T21:46:52.359166579Z" level=info msg="CreateContainer within sandbox \"d2330573f35f10c291be21685f878dc02b279a60e793c481cb063d7de9af08b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 14 21:46:52.360967 kubelet[2098]: E0714 21:46:52.360931 2098 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s"
Jul 14 21:46:52.371272 containerd[1439]: time="2025-07-14T21:46:52.371151620Z" level=info msg="CreateContainer within sandbox \"f76cc2a8a346bb44f2c131bd2430f24a776f266d35071ed8aae2339f5dd1ab46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fc0b850564c98028a52b91bd7fb0b55e31e34704d33e4b16db108d1452db078b\""
Jul 14 21:46:52.372490 containerd[1439]: time="2025-07-14T21:46:52.372391585Z" level=info msg="StartContainer for \"fc0b850564c98028a52b91bd7fb0b55e31e34704d33e4b16db108d1452db078b\""
Jul 14 21:46:52.377829 containerd[1439]: time="2025-07-14T21:46:52.377765542Z" level=info msg="CreateContainer within sandbox \"d2330573f35f10c291be21685f878dc02b279a60e793c481cb063d7de9af08b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eb0b18afd87de04f2bc38b74426f48556aef3bb5233a4d28588b9935b4bfd12a\""
Jul 14 21:46:52.378322 containerd[1439]: time="2025-07-14T21:46:52.378280442Z" level=info msg="StartContainer for \"eb0b18afd87de04f2bc38b74426f48556aef3bb5233a4d28588b9935b4bfd12a\""
Jul 14 21:46:52.379419 containerd[1439]: time="2025-07-14T21:46:52.379357351Z" level=info msg="CreateContainer within sandbox \"6236ec1cea4cfeae0ef27b7f5cfb9471b87dbdf8b06b50dee42fa5b04a51d51d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e534a964b35a0ec059d53b6c8de5c06ab248632e6ef2af2bbd5502a25ab65e2\""
Jul 14 21:46:52.379896 containerd[1439]: time="2025-07-14T21:46:52.379851436Z" level=info msg="StartContainer for \"1e534a964b35a0ec059d53b6c8de5c06ab248632e6ef2af2bbd5502a25ab65e2\""
Jul 14 21:46:52.398311 kubelet[2098]: E0714 21:46:52.398270 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 14 21:46:52.398717 systemd[1]: Started cri-containerd-fc0b850564c98028a52b91bd7fb0b55e31e34704d33e4b16db108d1452db078b.scope - libcontainer container fc0b850564c98028a52b91bd7fb0b55e31e34704d33e4b16db108d1452db078b.
Jul 14 21:46:52.417635 systemd[1]: Started cri-containerd-1e534a964b35a0ec059d53b6c8de5c06ab248632e6ef2af2bbd5502a25ab65e2.scope - libcontainer container 1e534a964b35a0ec059d53b6c8de5c06ab248632e6ef2af2bbd5502a25ab65e2.
Jul 14 21:46:52.422432 systemd[1]: Started cri-containerd-eb0b18afd87de04f2bc38b74426f48556aef3bb5233a4d28588b9935b4bfd12a.scope - libcontainer container eb0b18afd87de04f2bc38b74426f48556aef3bb5233a4d28588b9935b4bfd12a.
Jul 14 21:46:52.452403 kubelet[2098]: E0714 21:46:52.444107 2098 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.35:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 14 21:46:52.462003 containerd[1439]: time="2025-07-14T21:46:52.454822678Z" level=info msg="StartContainer for \"fc0b850564c98028a52b91bd7fb0b55e31e34704d33e4b16db108d1452db078b\" returns successfully"
Jul 14 21:46:52.462003 containerd[1439]: time="2025-07-14T21:46:52.459027669Z" level=info msg="StartContainer for \"1e534a964b35a0ec059d53b6c8de5c06ab248632e6ef2af2bbd5502a25ab65e2\" returns successfully"
Jul 14 21:46:52.490389 containerd[1439]: time="2025-07-14T21:46:52.484337444Z" level=info msg="StartContainer for \"eb0b18afd87de04f2bc38b74426f48556aef3bb5233a4d28588b9935b4bfd12a\" returns successfully"
Jul 14 21:46:52.596816 kubelet[2098]: I0714 21:46:52.596034 2098 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:46:52.596816 kubelet[2098]: E0714 21:46:52.596424 2098 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost"
Jul 14 21:46:52.988178 kubelet[2098]: E0714 21:46:52.987696 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:46:52.988178 kubelet[2098]: E0714 21:46:52.987819 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:52.989246 kubelet[2098]: E0714 21:46:52.989037 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:46:52.989246 kubelet[2098]: E0714 21:46:52.989161 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:52.990773 kubelet[2098]: E0714 21:46:52.990613 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:46:52.990773 kubelet[2098]: E0714 21:46:52.990730 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:53.994338 kubelet[2098]: E0714 21:46:53.992408 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:46:53.994338 kubelet[2098]: E0714 21:46:53.992541 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:53.994338 kubelet[2098]: E0714 21:46:53.992710 2098 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 21:46:53.994338 kubelet[2098]: E0714 21:46:53.992785 2098 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:54.198542 kubelet[2098]: I0714 21:46:54.198503 2098 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:46:54.587042 kubelet[2098]: E0714 21:46:54.587001 2098 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 14 21:46:54.639696 kubelet[2098]: I0714 21:46:54.639660 2098 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 14 21:46:54.657691 kubelet[2098]: I0714 21:46:54.657650 2098 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:54.714469 kubelet[2098]: E0714 21:46:54.713695 2098 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:54.714469 kubelet[2098]: I0714 21:46:54.713729 2098 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:54.717947 kubelet[2098]: E0714 21:46:54.716232 2098 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:54.717947 kubelet[2098]: I0714 21:46:54.716260 2098 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:54.720015 kubelet[2098]: E0714 21:46:54.718477 2098 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:54.942193 kubelet[2098]: I0714 21:46:54.941907 2098 apiserver.go:52] "Watching apiserver"
Jul 14 21:46:54.957033 kubelet[2098]: I0714 21:46:54.957009 2098 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 14 21:46:56.942711 systemd[1]: Reloading requested from client PID 2384 ('systemctl') (unit session-7.scope)...
Jul 14 21:46:56.943033 systemd[1]: Reloading...
Jul 14 21:46:57.001560 zram_generator::config[2426]: No configuration found.
Jul 14 21:46:57.081450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 21:46:57.145484 systemd[1]: Reloading finished in 202 ms.
Jul 14 21:46:57.176009 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:46:57.190399 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 21:46:57.190666 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:46:57.190720 systemd[1]: kubelet.service: Consumed 1.901s CPU time, 130.8M memory peak, 0B memory swap peak.
Jul 14 21:46:57.198747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 21:46:57.305673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 21:46:57.309932 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 21:46:57.355722 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:46:57.355722 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 14 21:46:57.355722 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 21:46:57.356097 kubelet[2465]: I0714 21:46:57.355706 2465 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 21:46:57.364145 kubelet[2465]: I0714 21:46:57.364049 2465 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 14 21:46:57.364145 kubelet[2465]: I0714 21:46:57.364077 2465 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 21:46:57.364366 kubelet[2465]: I0714 21:46:57.364294 2465 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 14 21:46:57.365999 kubelet[2465]: I0714 21:46:57.365980 2465 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 14 21:46:57.368496 kubelet[2465]: I0714 21:46:57.368168 2465 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 21:46:57.373184 kubelet[2465]: E0714 21:46:57.373141 2465 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 21:46:57.373403 kubelet[2465]: I0714 21:46:57.373390 2465 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 21:46:57.376603 kubelet[2465]: I0714 21:46:57.376570 2465 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 21:46:57.376812 kubelet[2465]: I0714 21:46:57.376790 2465 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 21:46:57.377036 kubelet[2465]: I0714 21:46:57.376815 2465 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 21:46:57.377113 kubelet[2465]: I0714 21:46:57.377056 2465 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 21:46:57.377113 kubelet[2465]: I0714 21:46:57.377067 2465 container_manager_linux.go:303] "Creating device plugin manager"
Jul 14 21:46:57.377113 kubelet[2465]: I0714 21:46:57.377112 2465 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:46:57.377958 kubelet[2465]: I0714 21:46:57.377249 2465 kubelet.go:480] "Attempting to sync node with API server"
Jul 14 21:46:57.377958 kubelet[2465]: I0714 21:46:57.377262 2465 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 21:46:57.377958 kubelet[2465]: I0714 21:46:57.377283 2465 kubelet.go:386] "Adding apiserver pod source"
Jul 14 21:46:57.377958 kubelet[2465]: I0714 21:46:57.377306 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 21:46:57.379923 kubelet[2465]: I0714 21:46:57.379877 2465 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 21:46:57.380815 kubelet[2465]: I0714 21:46:57.380782 2465 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 14 21:46:57.385608 kubelet[2465]: I0714 21:46:57.385574 2465 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 14 21:46:57.385684 kubelet[2465]: I0714 21:46:57.385618 2465 server.go:1289] "Started kubelet"
Jul 14 21:46:57.385762 kubelet[2465]: I0714 21:46:57.385705 2465 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 21:46:57.385981 kubelet[2465]: I0714 21:46:57.385927 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 21:46:57.386218 kubelet[2465]: I0714 21:46:57.386196 2465 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 21:46:57.386610 kubelet[2465]: I0714 21:46:57.386585 2465 server.go:317] "Adding debug handlers to kubelet server"
Jul 14 21:46:57.388078 kubelet[2465]: I0714 21:46:57.388042 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 21:46:57.390558 kubelet[2465]: I0714 21:46:57.390535 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 21:46:57.400569 kubelet[2465]: I0714 21:46:57.400535 2465 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 14 21:46:57.400774 kubelet[2465]: E0714 21:46:57.400744 2465 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 21:46:57.401295 kubelet[2465]: I0714 21:46:57.401279 2465 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 21:46:57.401387 kubelet[2465]: I0714 21:46:57.401377 2465 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 14 21:46:57.402651 kubelet[2465]: I0714 21:46:57.402625 2465 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 21:46:57.404715 kubelet[2465]: E0714 21:46:57.404387 2465 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 21:46:57.406527 kubelet[2465]: I0714 21:46:57.406503 2465 factory.go:223] Registration of the containerd container factory successfully
Jul 14 21:46:57.406527 kubelet[2465]: I0714 21:46:57.406521 2465 factory.go:223] Registration of the systemd container factory successfully
Jul 14 21:46:57.414250 kubelet[2465]: I0714 21:46:57.414200 2465 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 14 21:46:57.415596 kubelet[2465]: I0714 21:46:57.415576 2465 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 14 21:46:57.415715 kubelet[2465]: I0714 21:46:57.415705 2465 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 14 21:46:57.415867 kubelet[2465]: I0714 21:46:57.415854 2465 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 14 21:46:57.416005 kubelet[2465]: I0714 21:46:57.415952 2465 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 14 21:46:57.416132 kubelet[2465]: E0714 21:46:57.416114 2465 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441744 2465 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441762 2465 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441782 2465 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441920 2465 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441930 2465 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441946 2465 policy_none.go:49] "None policy: Start"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441954 2465 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.441962 2465 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 21:46:57.442162 kubelet[2465]: I0714 21:46:57.442039 2465 state_mem.go:75] "Updated machine memory state"
Jul 14 21:46:57.445499 kubelet[2465]: E0714 21:46:57.445464 2465 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 14 21:46:57.446001 kubelet[2465]: I0714 21:46:57.445631 2465 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 21:46:57.446001 kubelet[2465]: I0714 21:46:57.445646 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 21:46:57.446217 kubelet[2465]: I0714 21:46:57.445855 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 21:46:57.446493 kubelet[2465]: E0714 21:46:57.446477 2465 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 14 21:46:57.517824 kubelet[2465]: I0714 21:46:57.517702 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:57.517824 kubelet[2465]: I0714 21:46:57.517811 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:57.518045 kubelet[2465]: I0714 21:46:57.517711 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:57.549773 kubelet[2465]: I0714 21:46:57.549736 2465 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 21:46:57.593477 kubelet[2465]: I0714 21:46:57.593419 2465 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 14 21:46:57.593615 kubelet[2465]: I0714 21:46:57.593514 2465 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 14 21:46:57.602783 kubelet[2465]: I0714 21:46:57.602758 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70c6731afdcc3a5de6f5905affddfe31-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"70c6731afdcc3a5de6f5905affddfe31\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:57.602863 kubelet[2465]: I0714 21:46:57.602788 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:57.602863 kubelet[2465]: I0714 21:46:57.602808 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:57.602863 kubelet[2465]: I0714 21:46:57.602823 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:57.602863 kubelet[2465]: I0714 21:46:57.602838 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:57.602863 kubelet[2465]: I0714 21:46:57.602853 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:57.602995 kubelet[2465]: I0714 21:46:57.602866 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70c6731afdcc3a5de6f5905affddfe31-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"70c6731afdcc3a5de6f5905affddfe31\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:57.602995 kubelet[2465]: I0714 21:46:57.602880 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 21:46:57.602995 kubelet[2465]: I0714 21:46:57.602895 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70c6731afdcc3a5de6f5905affddfe31-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"70c6731afdcc3a5de6f5905affddfe31\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:57.824764 kubelet[2465]: E0714 21:46:57.824651 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:57.825703 kubelet[2465]: E0714 21:46:57.825589 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:57.825830 kubelet[2465]: E0714 21:46:57.825807 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:58.378884 kubelet[2465]: I0714 21:46:58.378790 2465 apiserver.go:52] "Watching apiserver"
Jul 14 21:46:58.402190 kubelet[2465]: I0714 21:46:58.402105 2465 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 14 21:46:58.432337 kubelet[2465]: E0714 21:46:58.432247 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:58.432496 kubelet[2465]: I0714 21:46:58.432478 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:58.432653 kubelet[2465]: I0714 21:46:58.432640 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:58.442926 kubelet[2465]: E0714 21:46:58.442882 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 14 21:46:58.443062 kubelet[2465]: E0714 21:46:58.443042 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:58.443273 kubelet[2465]: E0714 21:46:58.442610 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 14 21:46:58.443537 kubelet[2465]: E0714 21:46:58.443393 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:58.462455 kubelet[2465]: I0714 21:46:58.462359 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.462340593 podStartE2EDuration="1.462340593s" podCreationTimestamp="2025-07-14 21:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:46:58.452667543 +0000 UTC m=+1.138351704" watchObservedRunningTime="2025-07-14 21:46:58.462340593 +0000 UTC m=+1.148024754"
Jul 14 21:46:58.471746 kubelet[2465]: I0714 21:46:58.471683 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.471668576 podStartE2EDuration="1.471668576s" podCreationTimestamp="2025-07-14 21:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:46:58.46298167 +0000 UTC m=+1.148665791" watchObservedRunningTime="2025-07-14 21:46:58.471668576 +0000 UTC m=+1.157352737"
Jul 14 21:46:58.481816 kubelet[2465]: I0714 21:46:58.481258 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.481242436 podStartE2EDuration="1.481242436s" podCreationTimestamp="2025-07-14 21:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:46:58.471811864 +0000 UTC m=+1.157495985" watchObservedRunningTime="2025-07-14 21:46:58.481242436 +0000 UTC m=+1.166926597"
Jul 14 21:46:59.433689 kubelet[2465]: E0714 21:46:59.433575 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:59.433689 kubelet[2465]: E0714 21:46:59.433628 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:46:59.433689 kubelet[2465]: E0714 21:46:59.433677 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:00.435204 kubelet[2465]: E0714 21:47:00.435175 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:02.095696 kubelet[2465]: I0714 21:47:02.095665 2465 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 14 21:47:02.096479 containerd[1439]: time="2025-07-14T21:47:02.096433105Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 14 21:47:02.097431 kubelet[2465]: I0714 21:47:02.096623 2465 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 14 21:47:02.739621 kubelet[2465]: E0714 21:47:02.739539 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:03.167784 systemd[1]: Created slice kubepods-besteffort-pod84220344_8d13_4a7e_a43d_d450d6e207ef.slice - libcontainer container kubepods-besteffort-pod84220344_8d13_4a7e_a43d_d450d6e207ef.slice.
Jul 14 21:47:03.242151 kubelet[2465]: I0714 21:47:03.242114 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84220344-8d13-4a7e-a43d-d450d6e207ef-kube-proxy\") pod \"kube-proxy-fvhxq\" (UID: \"84220344-8d13-4a7e-a43d-d450d6e207ef\") " pod="kube-system/kube-proxy-fvhxq"
Jul 14 21:47:03.242151 kubelet[2465]: I0714 21:47:03.242150 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84220344-8d13-4a7e-a43d-d450d6e207ef-xtables-lock\") pod \"kube-proxy-fvhxq\" (UID: \"84220344-8d13-4a7e-a43d-d450d6e207ef\") " pod="kube-system/kube-proxy-fvhxq"
Jul 14 21:47:03.242548 kubelet[2465]: I0714 21:47:03.242178 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84220344-8d13-4a7e-a43d-d450d6e207ef-lib-modules\") pod \"kube-proxy-fvhxq\" (UID: \"84220344-8d13-4a7e-a43d-d450d6e207ef\") " pod="kube-system/kube-proxy-fvhxq"
Jul 14 21:47:03.242548 kubelet[2465]: I0714 21:47:03.242211 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk9g4\" (UniqueName: \"kubernetes.io/projected/84220344-8d13-4a7e-a43d-d450d6e207ef-kube-api-access-nk9g4\") pod \"kube-proxy-fvhxq\" (UID: \"84220344-8d13-4a7e-a43d-d450d6e207ef\") " pod="kube-system/kube-proxy-fvhxq"
Jul 14 21:47:03.329097 systemd[1]: Created slice kubepods-besteffort-pod9ca1f576_55ca_4a47_9c44_53fea3654c52.slice - libcontainer container kubepods-besteffort-pod9ca1f576_55ca_4a47_9c44_53fea3654c52.slice.
Jul 14 21:47:03.342996 kubelet[2465]: I0714 21:47:03.342922 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ca1f576-55ca-4a47-9c44-53fea3654c52-var-lib-calico\") pod \"tigera-operator-747864d56d-798mx\" (UID: \"9ca1f576-55ca-4a47-9c44-53fea3654c52\") " pod="tigera-operator/tigera-operator-747864d56d-798mx"
Jul 14 21:47:03.342996 kubelet[2465]: I0714 21:47:03.342965 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzwqq\" (UniqueName: \"kubernetes.io/projected/9ca1f576-55ca-4a47-9c44-53fea3654c52-kube-api-access-xzwqq\") pod \"tigera-operator-747864d56d-798mx\" (UID: \"9ca1f576-55ca-4a47-9c44-53fea3654c52\") " pod="tigera-operator/tigera-operator-747864d56d-798mx"
Jul 14 21:47:03.439562 kubelet[2465]: E0714 21:47:03.439450 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:03.485382 kubelet[2465]: E0714 21:47:03.485321 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:03.485961 containerd[1439]: time="2025-07-14T21:47:03.485923500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvhxq,Uid:84220344-8d13-4a7e-a43d-d450d6e207ef,Namespace:kube-system,Attempt:0,}"
Jul 14 21:47:03.591899 containerd[1439]: time="2025-07-14T21:47:03.591781893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:47:03.591899 containerd[1439]: time="2025-07-14T21:47:03.591864934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:47:03.591899 containerd[1439]: time="2025-07-14T21:47:03.591889374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:03.592127 containerd[1439]: time="2025-07-14T21:47:03.591982534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:03.612630 systemd[1]: Started cri-containerd-f5ba0ac42f283b11568bc0e25b22bab825f348c02adc95b3f349ec9789c63d12.scope - libcontainer container f5ba0ac42f283b11568bc0e25b22bab825f348c02adc95b3f349ec9789c63d12.
Jul 14 21:47:03.629237 containerd[1439]: time="2025-07-14T21:47:03.629077424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fvhxq,Uid:84220344-8d13-4a7e-a43d-d450d6e207ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5ba0ac42f283b11568bc0e25b22bab825f348c02adc95b3f349ec9789c63d12\""
Jul 14 21:47:03.630127 kubelet[2465]: E0714 21:47:03.630104 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:03.633469 containerd[1439]: time="2025-07-14T21:47:03.633152692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-798mx,Uid:9ca1f576-55ca-4a47-9c44-53fea3654c52,Namespace:tigera-operator,Attempt:0,}"
Jul 14 21:47:03.634296 containerd[1439]: time="2025-07-14T21:47:03.634259699Z" level=info msg="CreateContainer within sandbox \"f5ba0ac42f283b11568bc0e25b22bab825f348c02adc95b3f349ec9789c63d12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 21:47:03.652688 containerd[1439]: time="2025-07-14T21:47:03.652629823Z" level=info msg="CreateContainer within sandbox \"f5ba0ac42f283b11568bc0e25b22bab825f348c02adc95b3f349ec9789c63d12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e10935034bd9fae702a004296f9ca42dbbc531fa8d8820c01e91480addb5cdd\""
Jul 14 21:47:03.654714 containerd[1439]: time="2025-07-14T21:47:03.653649310Z" level=info msg="StartContainer for \"5e10935034bd9fae702a004296f9ca42dbbc531fa8d8820c01e91480addb5cdd\""
Jul 14 21:47:03.657075 containerd[1439]: time="2025-07-14T21:47:03.656925892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:47:03.657075 containerd[1439]: time="2025-07-14T21:47:03.657013653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:47:03.657509 containerd[1439]: time="2025-07-14T21:47:03.657034573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:03.657509 containerd[1439]: time="2025-07-14T21:47:03.657316055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:03.675669 systemd[1]: Started cri-containerd-055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb.scope - libcontainer container 055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb.
Jul 14 21:47:03.678797 systemd[1]: Started cri-containerd-5e10935034bd9fae702a004296f9ca42dbbc531fa8d8820c01e91480addb5cdd.scope - libcontainer container 5e10935034bd9fae702a004296f9ca42dbbc531fa8d8820c01e91480addb5cdd.
Jul 14 21:47:03.718124 containerd[1439]: time="2025-07-14T21:47:03.718005824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-798mx,Uid:9ca1f576-55ca-4a47-9c44-53fea3654c52,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb\""
Jul 14 21:47:03.718124 containerd[1439]: time="2025-07-14T21:47:03.718048064Z" level=info msg="StartContainer for \"5e10935034bd9fae702a004296f9ca42dbbc531fa8d8820c01e91480addb5cdd\" returns successfully"
Jul 14 21:47:03.723695 containerd[1439]: time="2025-07-14T21:47:03.720340960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 14 21:47:04.379586 kubelet[2465]: E0714 21:47:04.379263 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:04.442404 kubelet[2465]: E0714 21:47:04.442301 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:04.443450 kubelet[2465]: E0714 21:47:04.443404 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:04.443768 kubelet[2465]: E0714 21:47:04.443738 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:04.533448 kubelet[2465]: I0714 21:47:04.533384 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fvhxq" podStartSLOduration=1.533364127 podStartE2EDuration="1.533364127s" podCreationTimestamp="2025-07-14 21:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:47:04.518546112 +0000 UTC m=+7.204230313" watchObservedRunningTime="2025-07-14 21:47:04.533364127 +0000 UTC m=+7.219048248"
Jul 14 21:47:05.007057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1304278168.mount: Deactivated successfully.
Jul 14 21:47:05.339572 containerd[1439]: time="2025-07-14T21:47:05.339523073Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:47:05.340518 containerd[1439]: time="2025-07-14T21:47:05.340195357Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 14 21:47:05.340993 containerd[1439]: time="2025-07-14T21:47:05.340961522Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:47:05.343377 containerd[1439]: time="2025-07-14T21:47:05.343346216Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:47:05.344314 containerd[1439]: time="2025-07-14T21:47:05.344278942Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.623904942s"
Jul 14 21:47:05.344391 containerd[1439]: time="2025-07-14T21:47:05.344323462Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 14 21:47:05.349861 containerd[1439]: time="2025-07-14T21:47:05.349821095Z" level=info msg="CreateContainer within sandbox \"055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 14 21:47:05.359257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983177691.mount: Deactivated successfully.
Jul 14 21:47:05.361595 containerd[1439]: time="2025-07-14T21:47:05.361548046Z" level=info msg="CreateContainer within sandbox \"055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600\""
Jul 14 21:47:05.362167 containerd[1439]: time="2025-07-14T21:47:05.362059089Z" level=info msg="StartContainer for \"f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600\""
Jul 14 21:47:05.388630 systemd[1]: Started cri-containerd-f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600.scope - libcontainer container f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600.
Jul 14 21:47:05.445379 containerd[1439]: time="2025-07-14T21:47:05.445218551Z" level=info msg="StartContainer for \"f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600\" returns successfully"
Jul 14 21:47:05.447812 kubelet[2465]: E0714 21:47:05.447465 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:07.527534 systemd[1]: cri-containerd-f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600.scope: Deactivated successfully.
Jul 14 21:47:07.559857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600-rootfs.mount: Deactivated successfully.
Jul 14 21:47:07.593455 containerd[1439]: time="2025-07-14T21:47:07.588313169Z" level=info msg="shim disconnected" id=f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600 namespace=k8s.io
Jul 14 21:47:07.593455 containerd[1439]: time="2025-07-14T21:47:07.592417711Z" level=warning msg="cleaning up after shim disconnected" id=f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600 namespace=k8s.io
Jul 14 21:47:07.593455 containerd[1439]: time="2025-07-14T21:47:07.592430832Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 21:47:07.616624 containerd[1439]: time="2025-07-14T21:47:07.616569002Z" level=warning msg="cleanup warnings time=\"2025-07-14T21:47:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 14 21:47:08.458540 kubelet[2465]: I0714 21:47:08.458477 2465 scope.go:117] "RemoveContainer" containerID="f760917b669419c3ec0ac91193cdd3c27fd2cabd1c4ce7336942f9d1ccab0600"
Jul 14 21:47:08.467876 containerd[1439]: time="2025-07-14T21:47:08.467814325Z" level=info msg="CreateContainer within sandbox \"055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 14 21:47:08.510075 containerd[1439]: time="2025-07-14T21:47:08.510014062Z" level=info msg="CreateContainer within sandbox \"055e06ab112377a7026a5973246273e1ee8fc49c0295e1c657e4a47c840540eb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a1d44de3a1d05f2f725ad1a6417ea13f65e7250e4141c44f9058409d461be189\""
Jul 14 21:47:08.510900 containerd[1439]: time="2025-07-14T21:47:08.510869667Z" level=info msg="StartContainer for \"a1d44de3a1d05f2f725ad1a6417ea13f65e7250e4141c44f9058409d461be189\""
Jul 14 21:47:08.538641 systemd[1]: Started cri-containerd-a1d44de3a1d05f2f725ad1a6417ea13f65e7250e4141c44f9058409d461be189.scope - libcontainer container a1d44de3a1d05f2f725ad1a6417ea13f65e7250e4141c44f9058409d461be189.
Jul 14 21:47:08.559498 containerd[1439]: time="2025-07-14T21:47:08.559366236Z" level=info msg="StartContainer for \"a1d44de3a1d05f2f725ad1a6417ea13f65e7250e4141c44f9058409d461be189\" returns successfully"
Jul 14 21:47:09.213153 kubelet[2465]: E0714 21:47:09.213073 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:09.473877 kubelet[2465]: I0714 21:47:09.472999 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-798mx" podStartSLOduration=4.846138288 podStartE2EDuration="6.472983128s" podCreationTimestamp="2025-07-14 21:47:03 +0000 UTC" firstStartedPulling="2025-07-14 21:47:03.719996357 +0000 UTC m=+6.405680518" lastFinishedPulling="2025-07-14 21:47:05.346841197 +0000 UTC m=+8.032525358" observedRunningTime="2025-07-14 21:47:05.461767251 +0000 UTC m=+8.147451412" watchObservedRunningTime="2025-07-14 21:47:09.472983128 +0000 UTC m=+12.158667289"
Jul 14 21:47:10.619762 sudo[1618]: pam_unix(sudo:session): session closed for user root
Jul 14 21:47:10.623559 sshd[1615]: pam_unix(sshd:session): session closed for user core
Jul 14 21:47:10.626423 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:33048.service: Deactivated successfully.
Jul 14 21:47:10.628490 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 21:47:10.629522 systemd[1]: session-7.scope: Consumed 7.056s CPU time, 155.1M memory peak, 0B memory swap peak.
Jul 14 21:47:10.630546 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit.
Jul 14 21:47:10.631788 systemd-logind[1419]: Removed session 7.
Jul 14 21:47:11.116237 update_engine[1423]: I20250714 21:47:11.116165 1423 update_attempter.cc:509] Updating boot flags...
Jul 14 21:47:11.152459 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2946)
Jul 14 21:47:11.202119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2948)
Jul 14 21:47:11.230470 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2948)
Jul 14 21:47:16.649753 systemd[1]: Created slice kubepods-besteffort-pod83b88aa7_518b_41c8_95b5_b40ccf06d082.slice - libcontainer container kubepods-besteffort-pod83b88aa7_518b_41c8_95b5_b40ccf06d082.slice.
Jul 14 21:47:16.733020 kubelet[2465]: I0714 21:47:16.732251 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83b88aa7-518b-41c8-95b5-b40ccf06d082-typha-certs\") pod \"calico-typha-5897c58fd7-9zxhr\" (UID: \"83b88aa7-518b-41c8-95b5-b40ccf06d082\") " pod="calico-system/calico-typha-5897c58fd7-9zxhr"
Jul 14 21:47:16.733020 kubelet[2465]: I0714 21:47:16.732295 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgtx5\" (UniqueName: \"kubernetes.io/projected/83b88aa7-518b-41c8-95b5-b40ccf06d082-kube-api-access-sgtx5\") pod \"calico-typha-5897c58fd7-9zxhr\" (UID: \"83b88aa7-518b-41c8-95b5-b40ccf06d082\") " pod="calico-system/calico-typha-5897c58fd7-9zxhr"
Jul 14 21:47:16.733020 kubelet[2465]: I0714 21:47:16.732317 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83b88aa7-518b-41c8-95b5-b40ccf06d082-tigera-ca-bundle\") pod \"calico-typha-5897c58fd7-9zxhr\" (UID: \"83b88aa7-518b-41c8-95b5-b40ccf06d082\") " pod="calico-system/calico-typha-5897c58fd7-9zxhr"
Jul 14 21:47:16.749667 systemd[1]: Created slice kubepods-besteffort-podb3753e5d_dd3a_47c4_942e_91e23e745a86.slice - libcontainer container kubepods-besteffort-podb3753e5d_dd3a_47c4_942e_91e23e745a86.slice.
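The Consumed line is systemd's cgroup accounting for the closed SSH session: CPU time from cpu.stat plus the cgroup v2 memory.peak high-water mark. The same numbers can be read back from the cgroup filesystem while a scope is still alive; a minimal sketch follows, where the slice path is an assumption based on the usual layout for user sessions (Flatcar's core user is uid 500) and should be adjusted to whatever `systemctl status session-7.scope` reports:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// Reads the cgroup v2 files behind systemd's "Consumed ... CPU time,
// ... memory peak" summary for a session scope.
func main() {
	// Assumed path; session scopes normally sit under the user's slice.
	base := "/sys/fs/cgroup/user.slice/user-500.slice/session-7.scope"
	for _, f := range []string{"memory.peak", "cpu.stat"} {
		data, err := os.ReadFile(base + "/" + f)
		if err != nil {
			fmt.Println(f, "unavailable:", err)
			continue
		}
		fmt.Printf("%s:\n%s\n", f, strings.TrimSpace(string(data)))
	}
}
```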
Jul 14 21:47:16.833645 kubelet[2465]: I0714 21:47:16.833287 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-cni-log-dir\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833645 kubelet[2465]: I0714 21:47:16.833336 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-var-lib-calico\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833645 kubelet[2465]: I0714 21:47:16.833398 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-cni-net-dir\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833645 kubelet[2465]: I0714 21:47:16.833432 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b3753e5d-dd3a-47c4-942e-91e23e745a86-node-certs\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833645 kubelet[2465]: I0714 21:47:16.833464 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3753e5d-dd3a-47c4-942e-91e23e745a86-tigera-ca-bundle\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833876 kubelet[2465]: I0714 21:47:16.833484 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-flexvol-driver-host\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833876 kubelet[2465]: I0714 21:47:16.833514 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-xtables-lock\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833876 kubelet[2465]: I0714 21:47:16.833527 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-var-run-calico\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833876 kubelet[2465]: I0714 21:47:16.833555 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-cni-bin-dir\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.833876 kubelet[2465]: I0714 21:47:16.833569 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-lib-modules\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.834275 kubelet[2465]: I0714 21:47:16.833583 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflbd\" (UniqueName: \"kubernetes.io/projected/b3753e5d-dd3a-47c4-942e-91e23e745a86-kube-api-access-nflbd\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.834275 kubelet[2465]: I0714 21:47:16.833597 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b3753e5d-dd3a-47c4-942e-91e23e745a86-policysync\") pod \"calico-node-6lwml\" (UID: \"b3753e5d-dd3a-47c4-942e-91e23e745a86\") " pod="calico-system/calico-node-6lwml"
Jul 14 21:47:16.928424 kubelet[2465]: E0714 21:47:16.927429 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcc6m" podUID="2c011bf7-d865-42c4-a2c0-d53c4ee5f22f"
Jul 14 21:47:16.953493 kubelet[2465]: E0714 21:47:16.953417 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:16.954406 containerd[1439]: time="2025-07-14T21:47:16.954024750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5897c58fd7-9zxhr,Uid:83b88aa7-518b-41c8-95b5-b40ccf06d082,Namespace:calico-system,Attempt:0,}"
Jul 14 21:47:16.956529 kubelet[2465]: E0714 21:47:16.956500 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:47:16.956529 kubelet[2465]: W0714 21:47:16.956518 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:47:16.960284 kubelet[2465]: E0714 21:47:16.960249 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:47:16.979521 containerd[1439]: time="2025-07-14T21:47:16.978765795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:47:16.979521 containerd[1439]: time="2025-07-14T21:47:16.978853475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:47:16.979521 containerd[1439]: time="2025-07-14T21:47:16.978881315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:16.979521 containerd[1439]: time="2025-07-14T21:47:16.979004116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:16.998919 systemd[1]: Started cri-containerd-5d815c94cf874bec9233376b22a4b906a81e5191ffa0670597536ae6a2639557.scope - libcontainer container 5d815c94cf874bec9233376b22a4b906a81e5191ffa0670597536ae6a2639557.
Jul 14 21:47:17.031534 containerd[1439]: time="2025-07-14T21:47:17.031488972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5897c58fd7-9zxhr,Uid:83b88aa7-518b-41c8-95b5-b40ccf06d082,Namespace:calico-system,Attempt:0,} returns sandbox id \"5d815c94cf874bec9233376b22a4b906a81e5191ffa0670597536ae6a2639557\""
Jul 14 21:47:17.032392 kubelet[2465]: E0714 21:47:17.032362 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:17.033497 containerd[1439]: time="2025-07-14T21:47:17.033470458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 14 21:47:17.036223 kubelet[2465]: I0714 21:47:17.036148 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2c011bf7-d865-42c4-a2c0-d53c4ee5f22f-registration-dir\") pod \"csi-node-driver-bcc6m\" (UID: \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\") " pod="calico-system/csi-node-driver-bcc6m"
Jul 14 21:47:17.036593 kubelet[2465]: I0714 21:47:17.036566 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2c011bf7-d865-42c4-a2c0-d53c4ee5f22f-socket-dir\") pod \"csi-node-driver-bcc6m\" (UID: \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\") " pod="calico-system/csi-node-driver-bcc6m"
Jul 14 21:47:17.051173 kubelet[2465]: I0714 21:47:17.051055 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8zz2\" (UniqueName: \"kubernetes.io/projected/2c011bf7-d865-42c4-a2c0-d53c4ee5f22f-kube-api-access-q8zz2\") pod \"csi-node-driver-bcc6m\" (UID: \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\") " pod="calico-system/csi-node-driver-bcc6m"
Jul 14 21:47:17.053284 kubelet[2465]: I0714 21:47:17.052398 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2c011bf7-d865-42c4-a2c0-d53c4ee5f22f-kubelet-dir\") pod \"csi-node-driver-bcc6m\" (UID: \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\") " pod="calico-system/csi-node-driver-bcc6m"
Jul 14 21:47:17.054109 containerd[1439]: time="2025-07-14T21:47:17.053752165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lwml,Uid:b3753e5d-dd3a-47c4-942e-91e23e745a86,Namespace:calico-system,Attempt:0,}"
Jul 14 21:47:17.056156 kubelet[2465]: I0714 21:47:17.056137 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2c011bf7-d865-42c4-a2c0-d53c4ee5f22f-varrun\") pod \"csi-node-driver-bcc6m\" (UID: \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\") " pod="calico-system/csi-node-driver-bcc6m"
Jul 14 21:47:17.087778 containerd[1439]: time="2025-07-14T21:47:17.087694157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:47:17.087778 containerd[1439]: time="2025-07-14T21:47:17.087753757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:47:17.087778 containerd[1439]: time="2025-07-14T21:47:17.087765277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:17.087970 containerd[1439]: time="2025-07-14T21:47:17.087846717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:47:17.112611 systemd[1]: Started cri-containerd-db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a.scope - libcontainer container db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a.
Jul 14 21:47:17.138239 containerd[1439]: time="2025-07-14T21:47:17.138195483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6lwml,Uid:b3753e5d-dd3a-47c4-942e-91e23e745a86,Namespace:calico-system,Attempt:0,} returns sandbox id \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\""
Jul 14 21:47:17.175694 kubelet[2465]: E0714 21:47:17.175675 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:47:17.175822 kubelet[2465]: W0714 21:47:17.175776 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:47:17.175822 kubelet[2465]: E0714 21:47:17.175796 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:47:18.165534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1148257998.mount: Deactivated successfully.
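The paired driver-call.go/plugins.go messages that recur in bursts through this capture all have one cause: the kubelet's FlexVolume prober finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ but no usable uds driver binary, so the init call produces no output, and decoding that empty output fails. The sketch below reproduces both logged error strings; it is an illustration of the failure mode, not the kubelet's actual probe code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// 1. The driver binary cannot be resolved, so exec fails with
	//    exec.ErrNotFound, which prints exactly as in the W-level line:
	//    "executable file not found in $PATH".
	if _, err := exec.LookPath("uds"); err != nil {
		fmt.Println("driver call failed:", err)
	}

	// 2. With no driver output, unmarshalling the empty string yields
	//    "unexpected end of JSON input", the error the prober then
	//    reports at driver-call.go:262 and again via plugins.go:703.
	var status map[string]interface{}
	if err := json.Unmarshal([]byte(""), &status); err != nil {
		fmt.Println("unmarshal failed:", err)
	}
}
```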
Jul 14 21:47:18.417012 kubelet[2465]: E0714 21:47:18.416884 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcc6m" podUID="2c011bf7-d865-42c4-a2c0-d53c4ee5f22f"
Jul 14 21:47:18.753757 containerd[1439]: time="2025-07-14T21:47:18.753638206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:47:18.754615 containerd[1439]: time="2025-07-14T21:47:18.754504328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 14 21:47:18.755412 containerd[1439]: time="2025-07-14T21:47:18.755196571Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:47:18.757308 containerd[1439]: time="2025-07-14T21:47:18.757239377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:47:18.758001 containerd[1439]: time="2025-07-14T21:47:18.757964579Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.724455201s"
Jul 14 21:47:18.758001 containerd[1439]: time="2025-07-14T21:47:18.758000419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 14 21:47:18.760400 containerd[1439]: time="2025-07-14T21:47:18.760368587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 14 21:47:18.773246 containerd[1439]: time="2025-07-14T21:47:18.773102587Z" level=info msg="CreateContainer within sandbox \"5d815c94cf874bec9233376b22a4b906a81e5191ffa0670597536ae6a2639557\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 14 21:47:18.782175 containerd[1439]: time="2025-07-14T21:47:18.782133455Z" level=info msg="CreateContainer within sandbox \"5d815c94cf874bec9233376b22a4b906a81e5191ffa0670597536ae6a2639557\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1fe973fb1b53847305c91a47171228fe0a16bac628f6f6236c11a28dd3091c2e\""
Jul 14 21:47:18.783275 containerd[1439]: time="2025-07-14T21:47:18.782545897Z" level=info msg="StartContainer for \"1fe973fb1b53847305c91a47171228fe0a16bac628f6f6236c11a28dd3091c2e\""
Jul 14 21:47:18.808636 systemd[1]: Started cri-containerd-1fe973fb1b53847305c91a47171228fe0a16bac628f6f6236c11a28dd3091c2e.scope - libcontainer container 1fe973fb1b53847305c91a47171228fe0a16bac628f6f6236c11a28dd3091c2e.
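The "stop pulling" and "Pulled image ... in ..." pairs above give enough to estimate effective pull throughput: 33087207 bytes read in 1.724455201s for typha, and 22150610 bytes in 1.623904942s for the earlier operator image. Note that "bytes read" is the compressed transfer size, so this is a rough network-side figure:

```go
package main

import "fmt"

// Back-of-the-envelope pull throughput from the two pull records in this log.
func main() {
	pulls := []struct {
		name    string
		bytes   float64
		seconds float64
	}{
		{"quay.io/tigera/operator:v1.38.3", 22150610, 1.623904942},
		{"ghcr.io/flatcar/calico/typha:v3.30.2", 33087207, 1.724455201},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MB/s\n", p.name, p.bytes/p.seconds/1e6)
	}
	// Prints roughly 13.6 MB/s and 19.2 MB/s respectively.
}
```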
Jul 14 21:47:18.886540 containerd[1439]: time="2025-07-14T21:47:18.886489023Z" level=info msg="StartContainer for \"1fe973fb1b53847305c91a47171228fe0a16bac628f6f6236c11a28dd3091c2e\" returns successfully"
Jul 14 21:47:19.483966 kubelet[2465]: E0714 21:47:19.483876 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:47:19.495631 kubelet[2465]: I0714 21:47:19.495569 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5897c58fd7-9zxhr" podStartSLOduration=1.7686114210000001 podStartE2EDuration="3.495554669s" podCreationTimestamp="2025-07-14 21:47:16 +0000 UTC" firstStartedPulling="2025-07-14 21:47:17.033220298 +0000 UTC m=+19.718904419" lastFinishedPulling="2025-07-14 21:47:18.760163506 +0000 UTC m=+21.445847667" observedRunningTime="2025-07-14 21:47:19.495202668 +0000 UTC m=+22.180886789" watchObservedRunningTime="2025-07-14 21:47:19.495554669 +0000 UTC m=+22.181238790"
Jul 14 21:47:19.563869 kubelet[2465]: E0714 21:47:19.563837 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:47:19.563869 kubelet[2465]: W0714 21:47:19.563863 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:47:19.564068 kubelet[2465]: E0714 21:47:19.563884 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
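The pod_startup_latency_tracker entry above is self-consistent: podStartSLOduration appears to be the end-to-end startup time minus the time spent pulling images, which the logged monotonic (m=+) offsets confirm:

```go
package main

import "fmt"

// Checks the relationship in the calico-typha startup entry above, using
// only values copied from that log line.
func main() {
	const (
		e2e          = 3.495554669  // podStartE2EDuration
		firstPulling = 19.718904419 // firstStartedPulling (m=+ offset)
		lastPulling  = 21.445847667 // lastFinishedPulling (m=+ offset)
	)
	slo := e2e - (lastPulling - firstPulling)
	fmt.Printf("%.9f\n", slo) // 1.768611421, matching podStartSLOduration
}
```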
Error: unexpected end of JSON input" Jul 14 21:47:19.577111 kubelet[2465]: E0714 21:47:19.576989 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:47:19.577111 kubelet[2465]: W0714 21:47:19.576998 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:47:19.577111 kubelet[2465]: E0714 21:47:19.577005 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:47:19.577209 kubelet[2465]: E0714 21:47:19.577151 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:47:19.577209 kubelet[2465]: W0714 21:47:19.577159 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:47:19.577209 kubelet[2465]: E0714 21:47:19.577166 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:47:19.577347 kubelet[2465]: E0714 21:47:19.577332 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:47:19.577347 kubelet[2465]: W0714 21:47:19.577342 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:47:19.577403 kubelet[2465]: E0714 21:47:19.577350 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:47:19.577794 kubelet[2465]: E0714 21:47:19.577679 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:47:19.577794 kubelet[2465]: W0714 21:47:19.577696 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:47:19.577794 kubelet[2465]: E0714 21:47:19.577707 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:47:19.578013 kubelet[2465]: E0714 21:47:19.577969 2465 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:47:19.578013 kubelet[2465]: W0714 21:47:19.577982 2465 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:47:19.578013 kubelet[2465]: E0714 21:47:19.577992 2465 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
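The three messages that repeat above are the kubelet's FlexVolume prober walking /opt/libexec/kubernetes/kubelet-plugins/volume/exec: for each vendor~driver directory it execs the driver binary with the argument init and expects a JSON status object on stdout. The nodeagent~uds/uds binary is installed by Calico's flexvol-driver init container, which only starts just below, so the exec fails, stdout is empty, and decoding the empty output is what yields "unexpected end of JSON input". A minimal sketch of that handshake (the status struct is trimmed to the fields needed here; the kubelet's real struct has more):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the JSON object a FlexVolume driver must print on stdout.
type driverStatus struct {
	Status  string `json:"status"` // "Success", "Failure" or "Not supported"
	Message string `json:"message,omitempty"`
}

// probeInit mimics the kubelet probe: run "<driver> init" and decode stdout.
func probeInit(driver string) (*driverStatus, error) {
	out, execErr := exec.Command(driver, "init").Output()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// A missing executable leaves out empty, and unmarshalling empty
		// input fails with "unexpected end of JSON input", as logged above.
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	st, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("driver status:", st.Status)
}

Once the flexvol-driver container has copied the uds binary into place, the same probe succeeds and these messages stop.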
Jul 14 21:47:19.914226 containerd[1439]: time="2025-07-14T21:47:19.914180927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:19.915519 containerd[1439]: time="2025-07-14T21:47:19.914527168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 14 21:47:19.915519 containerd[1439]: time="2025-07-14T21:47:19.915508331Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:19.917727 containerd[1439]: time="2025-07-14T21:47:19.917665337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:19.918388 containerd[1439]: time="2025-07-14T21:47:19.918252859Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.157849512s" Jul 14 21:47:19.918388 containerd[1439]: time="2025-07-14T21:47:19.918286499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 14 21:47:19.921911 containerd[1439]: time="2025-07-14T21:47:19.921882030Z" level=info msg="CreateContainer within sandbox \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 21:47:19.938153 containerd[1439]: time="2025-07-14T21:47:19.938039078Z" level=info msg="CreateContainer within sandbox \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a\"" Jul 14 21:47:19.938533 containerd[1439]: time="2025-07-14T21:47:19.938508080Z" level=info msg="StartContainer for \"47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a\"" Jul 14 21:47:19.970630 systemd[1]: Started cri-containerd-47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a.scope - libcontainer container 47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a. Jul 14 21:47:20.001066 containerd[1439]: time="2025-07-14T21:47:20.001004107Z" level=info msg="StartContainer for \"47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a\" returns successfully" Jul 14 21:47:20.030957 systemd[1]: cri-containerd-47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a.scope: Deactivated successfully. Jul 14 21:47:20.050132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a-rootfs.mount: Deactivated successfully.
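The pull that completes above ("Pulled image ... in 1.157849512s") is containerd's CRI image service fetching Calico's pod2daemon-flexvol image into the k8s.io namespace, after which the flexvol-driver init container runs and exits. The same fetch-and-unpack can be reproduced against the node's containerd socket with the Go client; a rough sketch, assuming the default socket path:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket, as used by the CRI on this node.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Fetch and unpack, mirroring the PullImage request in the log.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}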
Jul 14 21:47:20.060148 containerd[1439]: time="2025-07-14T21:47:20.059931757Z" level=info msg="shim disconnected" id=47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a namespace=k8s.io Jul 14 21:47:20.060148 containerd[1439]: time="2025-07-14T21:47:20.059985997Z" level=warning msg="cleaning up after shim disconnected" id=47eab4d98b648f316c8837853f8f025455d401d6ec8e187e5b68d8435a0ad08a namespace=k8s.io Jul 14 21:47:20.060148 containerd[1439]: time="2025-07-14T21:47:20.059995597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:47:20.417104 kubelet[2465]: E0714 21:47:20.417037 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcc6m" podUID="2c011bf7-d865-42c4-a2c0-d53c4ee5f22f" Jul 14 21:47:20.487848 kubelet[2465]: I0714 21:47:20.486733 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:47:20.487848 kubelet[2465]: E0714 21:47:20.487040 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:20.488768 containerd[1439]: time="2025-07-14T21:47:20.488721950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 21:47:22.417072 kubelet[2465]: E0714 21:47:22.417013 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bcc6m" podUID="2c011bf7-d865-42c4-a2c0-d53c4ee5f22f" Jul 14 21:47:22.887399 containerd[1439]: time="2025-07-14T21:47:22.887338278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:22.888082 containerd[1439]: time="2025-07-14T21:47:22.888039200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 14 21:47:22.888879 containerd[1439]: time="2025-07-14T21:47:22.888850762Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:22.891190 containerd[1439]: time="2025-07-14T21:47:22.891115008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:22.892469 containerd[1439]: time="2025-07-14T21:47:22.892061930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.40318938s" Jul 14 21:47:22.892469 containerd[1439]: time="2025-07-14T21:47:22.892111851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
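The "Nameserver limits exceeded" warnings above are independent of the Calico bring-up: the node's resolver configuration lists more nameservers than the kubelet will pass through to pods (three, the classic glibc resolver limit), so the extras are dropped and the line it actually applied is logged (1.1.1.1 1.0.0.1 8.8.8.8). A small sketch of the same trimming, with the file path and the limit stated as assumptions:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is the cap the kubelet applies when building a pod's
// resolv.conf (three, matching the traditional glibc resolver limit).
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // host resolver config (assumed path)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(servers) > maxNameservers {
		// This is the situation behind "Nameserver limits exceeded" above.
		fmt.Printf("limit exceeded; applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return
	}
	fmt.Printf("applying %v\n", servers)
}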
msg="CreateContainer within sandbox \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 21:47:22.916050 containerd[1439]: time="2025-07-14T21:47:22.915907153Z" level=info msg="CreateContainer within sandbox \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896\"" Jul 14 21:47:22.917527 containerd[1439]: time="2025-07-14T21:47:22.917482838Z" level=info msg="StartContainer for \"7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896\"" Jul 14 21:47:22.956704 systemd[1]: Started cri-containerd-7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896.scope - libcontainer container 7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896. Jul 14 21:47:22.983764 containerd[1439]: time="2025-07-14T21:47:22.983703813Z" level=info msg="StartContainer for \"7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896\" returns successfully" Jul 14 21:47:23.587335 systemd[1]: cri-containerd-7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896.scope: Deactivated successfully. Jul 14 21:47:23.610166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896-rootfs.mount: Deactivated successfully. Jul 14 21:47:23.683577 containerd[1439]: time="2025-07-14T21:47:23.683515428Z" level=info msg="shim disconnected" id=7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896 namespace=k8s.io Jul 14 21:47:23.683577 containerd[1439]: time="2025-07-14T21:47:23.683567788Z" level=warning msg="cleaning up after shim disconnected" id=7d5a8535c08d391142318a2512690368edf208e7cd4f1b4596b8e3f67586a896 namespace=k8s.io Jul 14 21:47:23.683577 containerd[1439]: time="2025-07-14T21:47:23.683575948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:47:23.684586 kubelet[2465]: I0714 21:47:23.684275 2465 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 21:47:23.731301 systemd[1]: Created slice kubepods-burstable-pod7aeaa372_8400_4d54_bcde_fb86f1edd957.slice - libcontainer container kubepods-burstable-pod7aeaa372_8400_4d54_bcde_fb86f1edd957.slice. Jul 14 21:47:23.741558 systemd[1]: Created slice kubepods-burstable-pod8a6a9630_19de_4934_990f_1a0a5b55fdcd.slice - libcontainer container kubepods-burstable-pod8a6a9630_19de_4934_990f_1a0a5b55fdcd.slice. Jul 14 21:47:23.753293 systemd[1]: Created slice kubepods-besteffort-podcaacf5f1_e0ed_4877_bf6f_031cb7eea2e7.slice - libcontainer container kubepods-besteffort-podcaacf5f1_e0ed_4877_bf6f_031cb7eea2e7.slice. Jul 14 21:47:23.759235 systemd[1]: Created slice kubepods-besteffort-pod63909ecb_5ac2_4278_909f_4d78ae798ccd.slice - libcontainer container kubepods-besteffort-pod63909ecb_5ac2_4278_909f_4d78ae798ccd.slice. Jul 14 21:47:23.766393 systemd[1]: Created slice kubepods-besteffort-pod7b023aed_807c_42ef_982d_e9e0dbb828c3.slice - libcontainer container kubepods-besteffort-pod7b023aed_807c_42ef_982d_e9e0dbb828c3.slice. Jul 14 21:47:23.773219 systemd[1]: Created slice kubepods-besteffort-pode89124a2_60e5_49d1_b652_c0b685dfb2cc.slice - libcontainer container kubepods-besteffort-pode89124a2_60e5_49d1_b652_c0b685dfb2cc.slice. 
Jul 14 21:47:23.777665 systemd[1]: Created slice kubepods-besteffort-pod51cd5ebd_5963_4ce2_ab69_bebc4a3c6e81.slice - libcontainer container kubepods-besteffort-pod51cd5ebd_5963_4ce2_ab69_bebc4a3c6e81.slice. Jul 14 21:47:23.804012 kubelet[2465]: I0714 21:47:23.803972 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndgzb\" (UniqueName: \"kubernetes.io/projected/51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81-kube-api-access-ndgzb\") pod \"calico-apiserver-64779d767d-x5ktd\" (UID: \"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81\") " pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" Jul 14 21:47:23.804211 kubelet[2465]: I0714 21:47:23.804195 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7b023aed-807c-42ef-982d-e9e0dbb828c3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-tbxpx\" (UID: \"7b023aed-807c-42ef-982d-e9e0dbb828c3\") " pod="calico-system/goldmane-768f4c5c69-tbxpx" Jul 14 21:47:23.804281 kubelet[2465]: I0714 21:47:23.804269 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-backend-key-pair\") pod \"whisker-74b74ffbd9-p9kqm\" (UID: \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\") " pod="calico-system/whisker-74b74ffbd9-p9kqm" Jul 14 21:47:23.804489 kubelet[2465]: I0714 21:47:23.804352 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhwn\" (UniqueName: \"kubernetes.io/projected/e89124a2-60e5-49d1-b652-c0b685dfb2cc-kube-api-access-qwhwn\") pod \"whisker-74b74ffbd9-p9kqm\" (UID: \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\") " pod="calico-system/whisker-74b74ffbd9-p9kqm" Jul 14 21:47:23.804489 kubelet[2465]: I0714 21:47:23.804373 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aeaa372-8400-4d54-bcde-fb86f1edd957-config-volume\") pod \"coredns-674b8bbfcf-99stm\" (UID: \"7aeaa372-8400-4d54-bcde-fb86f1edd957\") " pod="kube-system/coredns-674b8bbfcf-99stm" Jul 14 21:47:23.804489 kubelet[2465]: I0714 21:47:23.804395 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63909ecb-5ac2-4278-909f-4d78ae798ccd-calico-apiserver-certs\") pod \"calico-apiserver-64779d767d-8dt8r\" (UID: \"63909ecb-5ac2-4278-909f-4d78ae798ccd\") " pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" Jul 14 21:47:23.804489 kubelet[2465]: I0714 21:47:23.804422 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99s87\" (UniqueName: \"kubernetes.io/projected/7b023aed-807c-42ef-982d-e9e0dbb828c3-kube-api-access-99s87\") pod \"goldmane-768f4c5c69-tbxpx\" (UID: \"7b023aed-807c-42ef-982d-e9e0dbb828c3\") " pod="calico-system/goldmane-768f4c5c69-tbxpx" Jul 14 21:47:23.804489 kubelet[2465]: I0714 21:47:23.804460 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gkw\" (UniqueName: \"kubernetes.io/projected/caacf5f1-e0ed-4877-bf6f-031cb7eea2e7-kube-api-access-d5gkw\") pod \"calico-kube-controllers-59964484c9-cfz2t\" (UID: \"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7\") " 
pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" Jul 14 21:47:23.804645 kubelet[2465]: I0714 21:47:23.804511 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-ca-bundle\") pod \"whisker-74b74ffbd9-p9kqm\" (UID: \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\") " pod="calico-system/whisker-74b74ffbd9-p9kqm" Jul 14 21:47:23.804645 kubelet[2465]: I0714 21:47:23.804542 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7qv4\" (UniqueName: \"kubernetes.io/projected/63909ecb-5ac2-4278-909f-4d78ae798ccd-kube-api-access-d7qv4\") pod \"calico-apiserver-64779d767d-8dt8r\" (UID: \"63909ecb-5ac2-4278-909f-4d78ae798ccd\") " pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" Jul 14 21:47:23.804645 kubelet[2465]: I0714 21:47:23.804558 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caacf5f1-e0ed-4877-bf6f-031cb7eea2e7-tigera-ca-bundle\") pod \"calico-kube-controllers-59964484c9-cfz2t\" (UID: \"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7\") " pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" Jul 14 21:47:23.805917 kubelet[2465]: I0714 21:47:23.804584 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a6a9630-19de-4934-990f-1a0a5b55fdcd-config-volume\") pod \"coredns-674b8bbfcf-n28f9\" (UID: \"8a6a9630-19de-4934-990f-1a0a5b55fdcd\") " pod="kube-system/coredns-674b8bbfcf-n28f9" Jul 14 21:47:23.805971 kubelet[2465]: I0714 21:47:23.805945 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlj2j\" (UniqueName: \"kubernetes.io/projected/8a6a9630-19de-4934-990f-1a0a5b55fdcd-kube-api-access-qlj2j\") pod \"coredns-674b8bbfcf-n28f9\" (UID: \"8a6a9630-19de-4934-990f-1a0a5b55fdcd\") " pod="kube-system/coredns-674b8bbfcf-n28f9" Jul 14 21:47:23.806049 kubelet[2465]: I0714 21:47:23.805969 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81-calico-apiserver-certs\") pod \"calico-apiserver-64779d767d-x5ktd\" (UID: \"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81\") " pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" Jul 14 21:47:23.806049 kubelet[2465]: I0714 21:47:23.805997 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b023aed-807c-42ef-982d-e9e0dbb828c3-config\") pod \"goldmane-768f4c5c69-tbxpx\" (UID: \"7b023aed-807c-42ef-982d-e9e0dbb828c3\") " pod="calico-system/goldmane-768f4c5c69-tbxpx" Jul 14 21:47:23.806049 kubelet[2465]: I0714 21:47:23.806013 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b023aed-807c-42ef-982d-e9e0dbb828c3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-tbxpx\" (UID: \"7b023aed-807c-42ef-982d-e9e0dbb828c3\") " pod="calico-system/goldmane-768f4c5c69-tbxpx" Jul 14 21:47:23.806049 kubelet[2465]: I0714 21:47:23.806031 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-l5hst\" (UniqueName: \"kubernetes.io/projected/7aeaa372-8400-4d54-bcde-fb86f1edd957-kube-api-access-l5hst\") pod \"coredns-674b8bbfcf-99stm\" (UID: \"7aeaa372-8400-4d54-bcde-fb86f1edd957\") " pod="kube-system/coredns-674b8bbfcf-99stm" Jul 14 21:47:24.038222 kubelet[2465]: E0714 21:47:24.038099 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:24.038721 containerd[1439]: time="2025-07-14T21:47:24.038667884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99stm,Uid:7aeaa372-8400-4d54-bcde-fb86f1edd957,Namespace:kube-system,Attempt:0,}" Jul 14 21:47:24.047813 kubelet[2465]: E0714 21:47:24.047766 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:24.048388 containerd[1439]: time="2025-07-14T21:47:24.048337748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n28f9,Uid:8a6a9630-19de-4934-990f-1a0a5b55fdcd,Namespace:kube-system,Attempt:0,}" Jul 14 21:47:24.060080 containerd[1439]: time="2025-07-14T21:47:24.059947016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59964484c9-cfz2t,Uid:caacf5f1-e0ed-4877-bf6f-031cb7eea2e7,Namespace:calico-system,Attempt:0,}" Jul 14 21:47:24.064283 containerd[1439]: time="2025-07-14T21:47:24.064101346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-8dt8r,Uid:63909ecb-5ac2-4278-909f-4d78ae798ccd,Namespace:calico-apiserver,Attempt:0,}" Jul 14 21:47:24.070554 containerd[1439]: time="2025-07-14T21:47:24.070509442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tbxpx,Uid:7b023aed-807c-42ef-982d-e9e0dbb828c3,Namespace:calico-system,Attempt:0,}" Jul 14 21:47:24.086840 containerd[1439]: time="2025-07-14T21:47:24.086795562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b74ffbd9-p9kqm,Uid:e89124a2-60e5-49d1-b652-c0b685dfb2cc,Namespace:calico-system,Attempt:0,}" Jul 14 21:47:24.087088 containerd[1439]: time="2025-07-14T21:47:24.087065042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-x5ktd,Uid:51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81,Namespace:calico-apiserver,Attempt:0,}" Jul 14 21:47:24.427797 systemd[1]: Created slice kubepods-besteffort-pod2c011bf7_d865_42c4_a2c0_d53c4ee5f22f.slice - libcontainer container kubepods-besteffort-pod2c011bf7_d865_42c4_a2c0_d53c4ee5f22f.slice. 
Jul 14 21:47:24.441758 containerd[1439]: time="2025-07-14T21:47:24.439429940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcc6m,Uid:2c011bf7-d865-42c4-a2c0-d53c4ee5f22f,Namespace:calico-system,Attempt:0,}" Jul 14 21:47:24.527774 containerd[1439]: time="2025-07-14T21:47:24.527732595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 21:47:24.582478 containerd[1439]: time="2025-07-14T21:47:24.582384688Z" level=error msg="Failed to destroy network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.585210 containerd[1439]: time="2025-07-14T21:47:24.585159615Z" level=error msg="encountered an error cleaning up failed sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.585541 containerd[1439]: time="2025-07-14T21:47:24.585510816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99stm,Uid:7aeaa372-8400-4d54-bcde-fb86f1edd957,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.586000 kubelet[2465]: E0714 21:47:24.585944 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.586119 kubelet[2465]: E0714 21:47:24.586031 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-99stm" Jul 14 21:47:24.586119 kubelet[2465]: E0714 21:47:24.586062 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-99stm" Jul 14 21:47:24.586176 kubelet[2465]: E0714 21:47:24.586124 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-99stm_kube-system(7aeaa372-8400-4d54-bcde-fb86f1edd957)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-99stm_kube-system(7aeaa372-8400-4d54-bcde-fb86f1edd957)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-99stm" podUID="7aeaa372-8400-4d54-bcde-fb86f1edd957" Jul 14 21:47:24.591029 containerd[1439]: time="2025-07-14T21:47:24.590976269Z" level=error msg="Failed to destroy network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.591451 containerd[1439]: time="2025-07-14T21:47:24.591403750Z" level=error msg="Failed to destroy network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.591738 containerd[1439]: time="2025-07-14T21:47:24.591704791Z" level=error msg="encountered an error cleaning up failed sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.591864 containerd[1439]: time="2025-07-14T21:47:24.591839671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-8dt8r,Uid:63909ecb-5ac2-4278-909f-4d78ae798ccd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.592115 containerd[1439]: time="2025-07-14T21:47:24.591760991Z" level=error msg="encountered an error cleaning up failed sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.592115 containerd[1439]: time="2025-07-14T21:47:24.592034032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n28f9,Uid:8a6a9630-19de-4934-990f-1a0a5b55fdcd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.592353 kubelet[2465]: E0714 21:47:24.592305 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.592413 kubelet[2465]: E0714 21:47:24.592358 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.592504 kubelet[2465]: E0714 21:47:24.592421 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" Jul 14 21:47:24.592504 kubelet[2465]: E0714 21:47:24.592450 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" Jul 14 21:47:24.592504 kubelet[2465]: E0714 21:47:24.592373 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n28f9" Jul 14 21:47:24.592593 kubelet[2465]: E0714 21:47:24.592499 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n28f9" Jul 14 21:47:24.592593 kubelet[2465]: E0714 21:47:24.592507 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64779d767d-8dt8r_calico-apiserver(63909ecb-5ac2-4278-909f-4d78ae798ccd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64779d767d-8dt8r_calico-apiserver(63909ecb-5ac2-4278-909f-4d78ae798ccd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" podUID="63909ecb-5ac2-4278-909f-4d78ae798ccd" Jul 14 21:47:24.592593 kubelet[2465]: E0714 21:47:24.592568 2465 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n28f9_kube-system(8a6a9630-19de-4934-990f-1a0a5b55fdcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n28f9_kube-system(8a6a9630-19de-4934-990f-1a0a5b55fdcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n28f9" podUID="8a6a9630-19de-4934-990f-1a0a5b55fdcd" Jul 14 21:47:24.594512 containerd[1439]: time="2025-07-14T21:47:24.594314677Z" level=error msg="Failed to destroy network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.595660 containerd[1439]: time="2025-07-14T21:47:24.595598240Z" level=error msg="encountered an error cleaning up failed sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.595740 containerd[1439]: time="2025-07-14T21:47:24.595663321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59964484c9-cfz2t,Uid:caacf5f1-e0ed-4877-bf6f-031cb7eea2e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.595995 kubelet[2465]: E0714 21:47:24.595924 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.596057 kubelet[2465]: E0714 21:47:24.596005 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" Jul 14 21:47:24.596057 kubelet[2465]: E0714 21:47:24.596027 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" Jul 14 
21:47:24.596124 kubelet[2465]: E0714 21:47:24.596099 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59964484c9-cfz2t_calico-system(caacf5f1-e0ed-4877-bf6f-031cb7eea2e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59964484c9-cfz2t_calico-system(caacf5f1-e0ed-4877-bf6f-031cb7eea2e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" podUID="caacf5f1-e0ed-4877-bf6f-031cb7eea2e7" Jul 14 21:47:24.604528 containerd[1439]: time="2025-07-14T21:47:24.604238061Z" level=error msg="Failed to destroy network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.604770 containerd[1439]: time="2025-07-14T21:47:24.604622502Z" level=error msg="encountered an error cleaning up failed sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.605553 containerd[1439]: time="2025-07-14T21:47:24.605506825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-x5ktd,Uid:51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.606000 kubelet[2465]: E0714 21:47:24.605788 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.606000 kubelet[2465]: E0714 21:47:24.605893 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" Jul 14 21:47:24.606000 kubelet[2465]: E0714 21:47:24.605914 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" Jul 14 21:47:24.606139 kubelet[2465]: E0714 21:47:24.605963 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64779d767d-x5ktd_calico-apiserver(51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64779d767d-x5ktd_calico-apiserver(51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" podUID="51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81" Jul 14 21:47:24.611625 containerd[1439]: time="2025-07-14T21:47:24.611580039Z" level=error msg="Failed to destroy network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.612425 containerd[1439]: time="2025-07-14T21:47:24.612222241Z" level=error msg="encountered an error cleaning up failed sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.612425 containerd[1439]: time="2025-07-14T21:47:24.612327681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tbxpx,Uid:7b023aed-807c-42ef-982d-e9e0dbb828c3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.613191 kubelet[2465]: E0714 21:47:24.612873 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.613246 kubelet[2465]: E0714 21:47:24.613209 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tbxpx" Jul 14 21:47:24.613246 kubelet[2465]: E0714 21:47:24.613229 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tbxpx" Jul 14 21:47:24.613306 kubelet[2465]: E0714 21:47:24.613278 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-tbxpx_calico-system(7b023aed-807c-42ef-982d-e9e0dbb828c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-tbxpx_calico-system(7b023aed-807c-42ef-982d-e9e0dbb828c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-tbxpx" podUID="7b023aed-807c-42ef-982d-e9e0dbb828c3" Jul 14 21:47:24.614005 containerd[1439]: time="2025-07-14T21:47:24.613973045Z" level=error msg="Failed to destroy network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.614410 containerd[1439]: time="2025-07-14T21:47:24.614366246Z" level=error msg="encountered an error cleaning up failed sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.614475 containerd[1439]: time="2025-07-14T21:47:24.614452926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74b74ffbd9-p9kqm,Uid:e89124a2-60e5-49d1-b652-c0b685dfb2cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.614824 kubelet[2465]: E0714 21:47:24.614659 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.614824 kubelet[2465]: E0714 21:47:24.614711 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b74ffbd9-p9kqm" Jul 14 21:47:24.614824 kubelet[2465]: E0714 21:47:24.614727 2465 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74b74ffbd9-p9kqm" Jul 14 21:47:24.614932 kubelet[2465]: E0714 21:47:24.614774 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74b74ffbd9-p9kqm_calico-system(e89124a2-60e5-49d1-b652-c0b685dfb2cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74b74ffbd9-p9kqm_calico-system(e89124a2-60e5-49d1-b652-c0b685dfb2cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74b74ffbd9-p9kqm" podUID="e89124a2-60e5-49d1-b652-c0b685dfb2cc" Jul 14 21:47:24.627466 containerd[1439]: time="2025-07-14T21:47:24.627381038Z" level=error msg="Failed to destroy network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.628433 containerd[1439]: time="2025-07-14T21:47:24.627733759Z" level=error msg="encountered an error cleaning up failed sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.628433 containerd[1439]: time="2025-07-14T21:47:24.627788919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcc6m,Uid:2c011bf7-d865-42c4-a2c0-d53c4ee5f22f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.628665 kubelet[2465]: E0714 21:47:24.627985 2465 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:24.628665 kubelet[2465]: E0714 21:47:24.628037 2465 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bcc6m" Jul 14 21:47:24.628665 kubelet[2465]: E0714 21:47:24.628067 2465 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bcc6m" Jul 14 21:47:24.628823 kubelet[2465]: E0714 21:47:24.628116 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bcc6m_calico-system(2c011bf7-d865-42c4-a2c0-d53c4ee5f22f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bcc6m_calico-system(2c011bf7-d865-42c4-a2c0-d53c4ee5f22f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bcc6m" podUID="2c011bf7-d865-42c4-a2c0-d53c4ee5f22f" Jul 14 21:47:25.529642 kubelet[2465]: I0714 21:47:25.528940 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:25.530023 containerd[1439]: time="2025-07-14T21:47:25.529884626Z" level=info msg="StopPodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\"" Jul 14 21:47:25.530223 containerd[1439]: time="2025-07-14T21:47:25.530067586Z" level=info msg="Ensure that sandbox 3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d in task-service has been cleanup successfully" Jul 14 21:47:25.530810 kubelet[2465]: I0714 21:47:25.530563 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:25.531948 containerd[1439]: time="2025-07-14T21:47:25.531890751Z" level=info msg="StopPodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\"" Jul 14 21:47:25.532164 containerd[1439]: time="2025-07-14T21:47:25.532115471Z" level=info msg="Ensure that sandbox 4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d in task-service has been cleanup successfully" Jul 14 21:47:25.532622 kubelet[2465]: I0714 21:47:25.532390 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:25.533572 containerd[1439]: time="2025-07-14T21:47:25.533534154Z" level=info msg="StopPodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\"" Jul 14 21:47:25.534079 containerd[1439]: time="2025-07-14T21:47:25.533938075Z" level=info msg="Ensure that sandbox 8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844 in task-service has been cleanup successfully" Jul 14 21:47:25.535306 kubelet[2465]: I0714 21:47:25.534608 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:25.535700 containerd[1439]: time="2025-07-14T21:47:25.535665359Z" level=info msg="StopPodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\"" Jul 14 21:47:25.535858 containerd[1439]: 
time="2025-07-14T21:47:25.535836760Z" level=info msg="Ensure that sandbox 4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468 in task-service has been cleanup successfully" Jul 14 21:47:25.538313 kubelet[2465]: I0714 21:47:25.537935 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:25.539120 containerd[1439]: time="2025-07-14T21:47:25.539074927Z" level=info msg="StopPodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\"" Jul 14 21:47:25.539318 containerd[1439]: time="2025-07-14T21:47:25.539292448Z" level=info msg="Ensure that sandbox 7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15 in task-service has been cleanup successfully" Jul 14 21:47:25.540858 kubelet[2465]: I0714 21:47:25.540823 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:25.542168 containerd[1439]: time="2025-07-14T21:47:25.541991294Z" level=info msg="StopPodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\"" Jul 14 21:47:25.542456 containerd[1439]: time="2025-07-14T21:47:25.542365215Z" level=info msg="Ensure that sandbox 288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820 in task-service has been cleanup successfully" Jul 14 21:47:25.544153 kubelet[2465]: I0714 21:47:25.543179 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:25.544219 containerd[1439]: time="2025-07-14T21:47:25.543751538Z" level=info msg="StopPodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\"" Jul 14 21:47:25.544219 containerd[1439]: time="2025-07-14T21:47:25.543917659Z" level=info msg="Ensure that sandbox f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64 in task-service has been cleanup successfully" Jul 14 21:47:25.546514 kubelet[2465]: I0714 21:47:25.546484 2465 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:25.548788 containerd[1439]: time="2025-07-14T21:47:25.548751270Z" level=info msg="StopPodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\"" Jul 14 21:47:25.548963 containerd[1439]: time="2025-07-14T21:47:25.548937910Z" level=info msg="Ensure that sandbox c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf in task-service has been cleanup successfully" Jul 14 21:47:25.580657 containerd[1439]: time="2025-07-14T21:47:25.580607425Z" level=error msg="StopPodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" failed" error="failed to destroy network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.581138 kubelet[2465]: E0714 21:47:25.580972 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:25.585696 kubelet[2465]: E0714 21:47:25.585617 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d"} Jul 14 21:47:25.585954 kubelet[2465]: E0714 21:47:25.585865 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.585954 kubelet[2465]: E0714 21:47:25.585914 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" podUID="caacf5f1-e0ed-4877-bf6f-031cb7eea2e7" Jul 14 21:47:25.601862 containerd[1439]: time="2025-07-14T21:47:25.600960832Z" level=error msg="StopPodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" failed" error="failed to destroy network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.601990 kubelet[2465]: E0714 21:47:25.601595 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:25.601990 kubelet[2465]: E0714 21:47:25.601652 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d"} Jul 14 21:47:25.601990 kubelet[2465]: E0714 21:47:25.601683 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b023aed-807c-42ef-982d-e9e0dbb828c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.601990 kubelet[2465]: E0714 21:47:25.601708 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" 
for \"7b023aed-807c-42ef-982d-e9e0dbb828c3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-tbxpx" podUID="7b023aed-807c-42ef-982d-e9e0dbb828c3" Jul 14 21:47:25.622653 containerd[1439]: time="2025-07-14T21:47:25.622595003Z" level=error msg="StopPodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" failed" error="failed to destroy network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.623094 kubelet[2465]: E0714 21:47:25.622910 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:25.623094 kubelet[2465]: E0714 21:47:25.622991 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15"} Jul 14 21:47:25.623094 kubelet[2465]: E0714 21:47:25.623026 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.623094 kubelet[2465]: E0714 21:47:25.623061 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" podUID="51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81" Jul 14 21:47:25.626213 containerd[1439]: time="2025-07-14T21:47:25.626162171Z" level=error msg="StopPodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" failed" error="failed to destroy network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.626420 kubelet[2465]: E0714 21:47:25.626379 2465 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:25.626491 kubelet[2465]: E0714 21:47:25.626450 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820"} Jul 14 21:47:25.626519 kubelet[2465]: E0714 21:47:25.626487 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.626575 kubelet[2465]: E0714 21:47:25.626515 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74b74ffbd9-p9kqm" podUID="e89124a2-60e5-49d1-b652-c0b685dfb2cc" Jul 14 21:47:25.629410 containerd[1439]: time="2025-07-14T21:47:25.629354499Z" level=error msg="StopPodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" failed" error="failed to destroy network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.629673 kubelet[2465]: E0714 21:47:25.629634 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:25.629735 kubelet[2465]: E0714 21:47:25.629683 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"} Jul 14 21:47:25.629735 kubelet[2465]: E0714 21:47:25.629719 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a6a9630-19de-4934-990f-1a0a5b55fdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.629823 kubelet[2465]: E0714 21:47:25.629743 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a6a9630-19de-4934-990f-1a0a5b55fdcd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n28f9" podUID="8a6a9630-19de-4934-990f-1a0a5b55fdcd" Jul 14 21:47:25.632051 containerd[1439]: time="2025-07-14T21:47:25.631990745Z" level=error msg="StopPodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" failed" error="failed to destroy network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.632474 kubelet[2465]: E0714 21:47:25.632400 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:25.632539 kubelet[2465]: E0714 21:47:25.632491 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64"} Jul 14 21:47:25.632539 kubelet[2465]: E0714 21:47:25.632528 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63909ecb-5ac2-4278-909f-4d78ae798ccd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.632628 kubelet[2465]: E0714 21:47:25.632549 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63909ecb-5ac2-4278-909f-4d78ae798ccd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" podUID="63909ecb-5ac2-4278-909f-4d78ae798ccd" Jul 14 21:47:25.638893 containerd[1439]: time="2025-07-14T21:47:25.638836921Z" level=error msg="StopPodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" failed" error="failed to destroy network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.639495 kubelet[2465]: E0714 21:47:25.639076 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:25.639495 kubelet[2465]: E0714 21:47:25.639132 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf"} Jul 14 21:47:25.639495 kubelet[2465]: E0714 21:47:25.639170 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7aeaa372-8400-4d54-bcde-fb86f1edd957\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.639495 kubelet[2465]: E0714 21:47:25.639191 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7aeaa372-8400-4d54-bcde-fb86f1edd957\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-99stm" podUID="7aeaa372-8400-4d54-bcde-fb86f1edd957" Jul 14 21:47:25.640549 containerd[1439]: time="2025-07-14T21:47:25.640488365Z" level=error msg="StopPodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" failed" error="failed to destroy network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:47:25.640887 kubelet[2465]: E0714 21:47:25.640754 2465 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:25.640887 kubelet[2465]: E0714 21:47:25.640797 2465 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468"} Jul 14 21:47:25.640887 kubelet[2465]: E0714 21:47:25.640829 2465 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:47:25.640887 kubelet[2465]: E0714 21:47:25.640849 2465 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bcc6m" podUID="2c011bf7-d865-42c4-a2c0-d53c4ee5f22f" Jul 14 21:47:28.740542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69108201.mount: Deactivated successfully. Jul 14 21:47:28.863821 containerd[1439]: time="2025-07-14T21:47:28.863756441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:28.864520 containerd[1439]: time="2025-07-14T21:47:28.864472122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 14 21:47:28.865232 containerd[1439]: time="2025-07-14T21:47:28.865192724Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:28.876160 containerd[1439]: time="2025-07-14T21:47:28.876113027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:28.876886 containerd[1439]: time="2025-07-14T21:47:28.876630548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.348852993s" Jul 14 21:47:28.876886 containerd[1439]: time="2025-07-14T21:47:28.876660148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 14 21:47:28.887049 containerd[1439]: time="2025-07-14T21:47:28.886857529Z" level=info msg="CreateContainer within sandbox \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 21:47:28.899972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147707184.mount: Deactivated successfully. 
Jul 14 21:47:28.920134 containerd[1439]: time="2025-07-14T21:47:28.920024399Z" level=info msg="CreateContainer within sandbox \"db1bc7272309c6d75805b37ce95306878e66f1e7e0223ce583c85798dd017e5a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e38f34e20d78f119af26e3e61d488ccd3dd3b0cdff0193a0886d0c598c16b018\"" Jul 14 21:47:28.920639 containerd[1439]: time="2025-07-14T21:47:28.920577240Z" level=info msg="StartContainer for \"e38f34e20d78f119af26e3e61d488ccd3dd3b0cdff0193a0886d0c598c16b018\"" Jul 14 21:47:28.986660 systemd[1]: Started cri-containerd-e38f34e20d78f119af26e3e61d488ccd3dd3b0cdff0193a0886d0c598c16b018.scope - libcontainer container e38f34e20d78f119af26e3e61d488ccd3dd3b0cdff0193a0886d0c598c16b018. Jul 14 21:47:29.203508 containerd[1439]: time="2025-07-14T21:47:29.203432218Z" level=info msg="StartContainer for \"e38f34e20d78f119af26e3e61d488ccd3dd3b0cdff0193a0886d0c598c16b018\" returns successfully" Jul 14 21:47:29.226022 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 21:47:29.226132 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 14 21:47:29.322386 containerd[1439]: time="2025-07-14T21:47:29.322334178Z" level=info msg="StopPodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\"" Jul 14 21:47:29.575017 kubelet[2465]: I0714 21:47:29.574671 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6lwml" podStartSLOduration=1.836589866 podStartE2EDuration="13.574657368s" podCreationTimestamp="2025-07-14 21:47:16 +0000 UTC" firstStartedPulling="2025-07-14 21:47:17.139348087 +0000 UTC m=+19.825032208" lastFinishedPulling="2025-07-14 21:47:28.877415549 +0000 UTC m=+31.563099710" observedRunningTime="2025-07-14 21:47:29.574047847 +0000 UTC m=+32.259732048" watchObservedRunningTime="2025-07-14 21:47:29.574657368 +0000 UTC m=+32.260341529" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.430 [INFO][3804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.432 [INFO][3804] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" iface="eth0" netns="/var/run/netns/cni-fc16f81e-a47d-1fd9-b338-92e2b38a38fb" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.433 [INFO][3804] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" iface="eth0" netns="/var/run/netns/cni-fc16f81e-a47d-1fd9-b338-92e2b38a38fb" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.433 [INFO][3804] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" iface="eth0" netns="/var/run/netns/cni-fc16f81e-a47d-1fd9-b338-92e2b38a38fb" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.433 [INFO][3804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.434 [INFO][3804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.554 [INFO][3814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.554 [INFO][3814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.555 [INFO][3814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.570 [WARNING][3814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.570 [INFO][3814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.573 [INFO][3814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:29.581090 containerd[1439]: 2025-07-14 21:47:29.578 [INFO][3804] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:29.581693 containerd[1439]: time="2025-07-14T21:47:29.581544982Z" level=info msg="TearDown network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" successfully" Jul 14 21:47:29.581726 containerd[1439]: time="2025-07-14T21:47:29.581695223Z" level=info msg="StopPodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" returns successfully" Jul 14 21:47:29.655376 kubelet[2465]: I0714 21:47:29.655008 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwhwn\" (UniqueName: \"kubernetes.io/projected/e89124a2-60e5-49d1-b652-c0b685dfb2cc-kube-api-access-qwhwn\") pod \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\" (UID: \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\") " Jul 14 21:47:29.655376 kubelet[2465]: I0714 21:47:29.655071 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-ca-bundle\") pod \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\" (UID: \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\") " Jul 14 21:47:29.655376 kubelet[2465]: I0714 21:47:29.655103 2465 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-backend-key-pair\") pod \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\" (UID: \"e89124a2-60e5-49d1-b652-c0b685dfb2cc\") " Jul 14 21:47:29.665003 kubelet[2465]: I0714 21:47:29.664950 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e89124a2-60e5-49d1-b652-c0b685dfb2cc" (UID: "e89124a2-60e5-49d1-b652-c0b685dfb2cc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:47:29.665698 kubelet[2465]: I0714 21:47:29.665344 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e89124a2-60e5-49d1-b652-c0b685dfb2cc" (UID: "e89124a2-60e5-49d1-b652-c0b685dfb2cc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:47:29.665905 kubelet[2465]: I0714 21:47:29.665863 2465 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89124a2-60e5-49d1-b652-c0b685dfb2cc-kube-api-access-qwhwn" (OuterVolumeSpecName: "kube-api-access-qwhwn") pod "e89124a2-60e5-49d1-b652-c0b685dfb2cc" (UID: "e89124a2-60e5-49d1-b652-c0b685dfb2cc"). InnerVolumeSpecName "kube-api-access-qwhwn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:47:29.740257 systemd[1]: run-netns-cni\x2dfc16f81e\x2da47d\x2d1fd9\x2db338\x2d92e2b38a38fb.mount: Deactivated successfully. Jul 14 21:47:29.740356 systemd[1]: var-lib-kubelet-pods-e89124a2\x2d60e5\x2d49d1\x2db652\x2dc0b685dfb2cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqwhwn.mount: Deactivated successfully. Jul 14 21:47:29.740415 systemd[1]: var-lib-kubelet-pods-e89124a2\x2d60e5\x2d49d1\x2db652\x2dc0b685dfb2cc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 14 21:47:29.756242 kubelet[2465]: I0714 21:47:29.756187 2465 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qwhwn\" (UniqueName: \"kubernetes.io/projected/e89124a2-60e5-49d1-b652-c0b685dfb2cc-kube-api-access-qwhwn\") on node \"localhost\" DevicePath \"\"" Jul 14 21:47:29.756242 kubelet[2465]: I0714 21:47:29.756226 2465 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 21:47:29.756242 kubelet[2465]: I0714 21:47:29.756235 2465 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e89124a2-60e5-49d1-b652-c0b685dfb2cc-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 21:47:30.559886 kubelet[2465]: I0714 21:47:30.559803 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:47:30.574385 systemd[1]: Removed slice kubepods-besteffort-pode89124a2_60e5_49d1_b652_c0b685dfb2cc.slice - libcontainer container kubepods-besteffort-pode89124a2_60e5_49d1_b652_c0b685dfb2cc.slice. Jul 14 21:47:30.657413 systemd[1]: Created slice kubepods-besteffort-podcb312ec7_afb3_4fc8_bee4_dfb3019d153b.slice - libcontainer container kubepods-besteffort-podcb312ec7_afb3_4fc8_bee4_dfb3019d153b.slice. Jul 14 21:47:30.663524 kubelet[2465]: I0714 21:47:30.663020 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb312ec7-afb3-4fc8-bee4-dfb3019d153b-whisker-ca-bundle\") pod \"whisker-77c8c97779-tdjx6\" (UID: \"cb312ec7-afb3-4fc8-bee4-dfb3019d153b\") " pod="calico-system/whisker-77c8c97779-tdjx6" Jul 14 21:47:30.663524 kubelet[2465]: I0714 21:47:30.663358 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9sbg\" (UniqueName: \"kubernetes.io/projected/cb312ec7-afb3-4fc8-bee4-dfb3019d153b-kube-api-access-s9sbg\") pod \"whisker-77c8c97779-tdjx6\" (UID: \"cb312ec7-afb3-4fc8-bee4-dfb3019d153b\") " pod="calico-system/whisker-77c8c97779-tdjx6" Jul 14 21:47:30.663524 kubelet[2465]: I0714 21:47:30.663470 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb312ec7-afb3-4fc8-bee4-dfb3019d153b-whisker-backend-key-pair\") pod \"whisker-77c8c97779-tdjx6\" (UID: \"cb312ec7-afb3-4fc8-bee4-dfb3019d153b\") " pod="calico-system/whisker-77c8c97779-tdjx6" Jul 14 21:47:30.962953 containerd[1439]: time="2025-07-14T21:47:30.962901391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c8c97779-tdjx6,Uid:cb312ec7-afb3-4fc8-bee4-dfb3019d153b,Namespace:calico-system,Attempt:0,}" Jul 14 21:47:31.087481 systemd-networkd[1364]: calif616ef11478: Link UP Jul 14 21:47:31.088351 systemd-networkd[1364]: calif616ef11478: Gained carrier Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:30.996 [INFO][3941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.010 [INFO][3941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77c8c97779--tdjx6-eth0 whisker-77c8c97779- calico-system cb312ec7-afb3-4fc8-bee4-dfb3019d153b 911 0 2025-07-14 21:47:30 +0000 UTC map[app.kubernetes.io/name:whisker 
k8s-app:whisker pod-template-hash:77c8c97779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77c8c97779-tdjx6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif616ef11478 [] [] }} ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.011 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.034 [INFO][3951] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" HandleID="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Workload="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.034 [INFO][3951] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" HandleID="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Workload="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77c8c97779-tdjx6", "timestamp":"2025-07-14 21:47:31.034013207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.034 [INFO][3951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.034 [INFO][3951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.034 [INFO][3951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.045 [INFO][3951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.059 [INFO][3951] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.063 [INFO][3951] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.065 [INFO][3951] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.067 [INFO][3951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.067 [INFO][3951] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.068 [INFO][3951] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.072 [INFO][3951] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.076 [INFO][3951] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.076 [INFO][3951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" host="localhost" Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.076 [INFO][3951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:47:31.108333 containerd[1439]: 2025-07-14 21:47:31.076 [INFO][3951] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" HandleID="k8s-pod-network.c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Workload="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.108982 containerd[1439]: 2025-07-14 21:47:31.078 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c8c97779--tdjx6-eth0", GenerateName:"whisker-77c8c97779-", Namespace:"calico-system", SelfLink:"", UID:"cb312ec7-afb3-4fc8-bee4-dfb3019d153b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c8c97779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77c8c97779-tdjx6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif616ef11478", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:31.108982 containerd[1439]: 2025-07-14 21:47:31.078 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.108982 containerd[1439]: 2025-07-14 21:47:31.078 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif616ef11478 ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.108982 containerd[1439]: 2025-07-14 21:47:31.092 [INFO][3941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.108982 containerd[1439]: 2025-07-14 21:47:31.092 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77c8c97779--tdjx6-eth0", GenerateName:"whisker-77c8c97779-", Namespace:"calico-system", SelfLink:"", UID:"cb312ec7-afb3-4fc8-bee4-dfb3019d153b", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77c8c97779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da", Pod:"whisker-77c8c97779-tdjx6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif616ef11478", MAC:"ae:d7:7f:6f:ab:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:31.108982 containerd[1439]: 2025-07-14 21:47:31.105 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da" Namespace="calico-system" Pod="whisker-77c8c97779-tdjx6" WorkloadEndpoint="localhost-k8s-whisker--77c8c97779--tdjx6-eth0" Jul 14 21:47:31.123011 containerd[1439]: time="2025-07-14T21:47:31.122399375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:31.123011 containerd[1439]: time="2025-07-14T21:47:31.122850295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:31.123011 containerd[1439]: time="2025-07-14T21:47:31.122877095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:31.123011 containerd[1439]: time="2025-07-14T21:47:31.122962376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:31.144706 systemd[1]: Started cri-containerd-c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da.scope - libcontainer container c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da. 
Jul 14 21:47:31.155184 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:31.177580 containerd[1439]: time="2025-07-14T21:47:31.177246358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77c8c97779-tdjx6,Uid:cb312ec7-afb3-4fc8-bee4-dfb3019d153b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da\"" Jul 14 21:47:31.179028 containerd[1439]: time="2025-07-14T21:47:31.178716641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 21:47:31.419875 kubelet[2465]: I0714 21:47:31.419694 2465 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89124a2-60e5-49d1-b652-c0b685dfb2cc" path="/var/lib/kubelet/pods/e89124a2-60e5-49d1-b652-c0b685dfb2cc/volumes" Jul 14 21:47:32.296842 containerd[1439]: time="2025-07-14T21:47:32.296793018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:32.297751 containerd[1439]: time="2025-07-14T21:47:32.297548859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 14 21:47:32.298505 containerd[1439]: time="2025-07-14T21:47:32.298467901Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:32.300612 containerd[1439]: time="2025-07-14T21:47:32.300581265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:32.302142 containerd[1439]: time="2025-07-14T21:47:32.302105667Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.123351266s" Jul 14 21:47:32.302142 containerd[1439]: time="2025-07-14T21:47:32.302138867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 14 21:47:32.306068 containerd[1439]: time="2025-07-14T21:47:32.306020315Z" level=info msg="CreateContainer within sandbox \"c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 21:47:32.318229 containerd[1439]: time="2025-07-14T21:47:32.318184737Z" level=info msg="CreateContainer within sandbox \"c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9cc6716cb654fd01fe7c7b82b780c48d1e42b3c0f13eff481be0ec9fa6c59c92\"" Jul 14 21:47:32.318770 containerd[1439]: time="2025-07-14T21:47:32.318661418Z" level=info msg="StartContainer for \"9cc6716cb654fd01fe7c7b82b780c48d1e42b3c0f13eff481be0ec9fa6c59c92\"" Jul 14 21:47:32.345635 systemd[1]: Started cri-containerd-9cc6716cb654fd01fe7c7b82b780c48d1e42b3c0f13eff481be0ec9fa6c59c92.scope - libcontainer container 9cc6716cb654fd01fe7c7b82b780c48d1e42b3c0f13eff481be0ec9fa6c59c92. 
Jul 14 21:47:32.378493 containerd[1439]: time="2025-07-14T21:47:32.378417847Z" level=info msg="StartContainer for \"9cc6716cb654fd01fe7c7b82b780c48d1e42b3c0f13eff481be0ec9fa6c59c92\" returns successfully" Jul 14 21:47:32.379995 containerd[1439]: time="2025-07-14T21:47:32.379788810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 21:47:32.554619 systemd-networkd[1364]: calif616ef11478: Gained IPv6LL Jul 14 21:47:34.104169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814610621.mount: Deactivated successfully. Jul 14 21:47:34.138458 containerd[1439]: time="2025-07-14T21:47:34.138398200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:34.139380 containerd[1439]: time="2025-07-14T21:47:34.139178001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 14 21:47:34.140112 containerd[1439]: time="2025-07-14T21:47:34.140069563Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:34.142356 containerd[1439]: time="2025-07-14T21:47:34.142325247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:34.143247 containerd[1439]: time="2025-07-14T21:47:34.143212528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.763388878s" Jul 14 21:47:34.143247 containerd[1439]: time="2025-07-14T21:47:34.143246568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 14 21:47:34.147402 containerd[1439]: time="2025-07-14T21:47:34.147367775Z" level=info msg="CreateContainer within sandbox \"c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 21:47:34.157446 containerd[1439]: time="2025-07-14T21:47:34.157397953Z" level=info msg="CreateContainer within sandbox \"c01a778914ea44c154969771754feb73db14f103a326fc5368bfda8df1d376da\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e39ca12db84a78b0f65010c4d878eb780a5f5cf92816032f18ba597d1560bfad\"" Jul 14 21:47:34.158165 containerd[1439]: time="2025-07-14T21:47:34.158114594Z" level=info msg="StartContainer for \"e39ca12db84a78b0f65010c4d878eb780a5f5cf92816032f18ba597d1560bfad\"" Jul 14 21:47:34.205674 systemd[1]: Started cri-containerd-e39ca12db84a78b0f65010c4d878eb780a5f5cf92816032f18ba597d1560bfad.scope - libcontainer container e39ca12db84a78b0f65010c4d878eb780a5f5cf92816032f18ba597d1560bfad. 
Jul 14 21:47:34.239644 containerd[1439]: time="2025-07-14T21:47:34.239595454Z" level=info msg="StartContainer for \"e39ca12db84a78b0f65010c4d878eb780a5f5cf92816032f18ba597d1560bfad\" returns successfully" Jul 14 21:47:34.586157 kubelet[2465]: I0714 21:47:34.585899 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-77c8c97779-tdjx6" podStartSLOduration=1.6202674419999998 podStartE2EDuration="4.585863611s" podCreationTimestamp="2025-07-14 21:47:30 +0000 UTC" firstStartedPulling="2025-07-14 21:47:31.178487801 +0000 UTC m=+33.864171922" lastFinishedPulling="2025-07-14 21:47:34.14408397 +0000 UTC m=+36.829768091" observedRunningTime="2025-07-14 21:47:34.584120448 +0000 UTC m=+37.269804609" watchObservedRunningTime="2025-07-14 21:47:34.585863611 +0000 UTC m=+37.271547772" Jul 14 21:47:36.417396 containerd[1439]: time="2025-07-14T21:47:36.417353159Z" level=info msg="StopPodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\"" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.501 [INFO][4234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.502 [INFO][4234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" iface="eth0" netns="/var/run/netns/cni-fa051c2a-8d22-9f69-3ae9-4eb5cdab6a97" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.502 [INFO][4234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" iface="eth0" netns="/var/run/netns/cni-fa051c2a-8d22-9f69-3ae9-4eb5cdab6a97" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.504 [INFO][4234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" iface="eth0" netns="/var/run/netns/cni-fa051c2a-8d22-9f69-3ae9-4eb5cdab6a97" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.504 [INFO][4234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.504 [INFO][4234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.524 [INFO][4243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.524 [INFO][4243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.524 [INFO][4243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.535 [WARNING][4243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.535 [INFO][4243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.537 [INFO][4243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:36.543080 containerd[1439]: 2025-07-14 21:47:36.540 [INFO][4234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:36.543499 containerd[1439]: time="2025-07-14T21:47:36.543219684Z" level=info msg="TearDown network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" successfully" Jul 14 21:47:36.543499 containerd[1439]: time="2025-07-14T21:47:36.543245764Z" level=info msg="StopPodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" returns successfully" Jul 14 21:47:36.546080 containerd[1439]: time="2025-07-14T21:47:36.545070607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-8dt8r,Uid:63909ecb-5ac2-4278-909f-4d78ae798ccd,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:47:36.545431 systemd[1]: run-netns-cni\x2dfa051c2a\x2d8d22\x2d9f69\x2d3ae9\x2d4eb5cdab6a97.mount: Deactivated successfully. Jul 14 21:47:36.664212 kubelet[2465]: I0714 21:47:36.664159 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:47:36.807281 systemd-networkd[1364]: cali8db3f3b1613: Link UP Jul 14 21:47:36.808124 systemd-networkd[1364]: cali8db3f3b1613: Gained carrier Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.722 [INFO][4256] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.740 [INFO][4256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0 calico-apiserver-64779d767d- calico-apiserver 63909ecb-5ac2-4278-909f-4d78ae798ccd 944 0 2025-07-14 21:47:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64779d767d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64779d767d-8dt8r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8db3f3b1613 [] [] }} ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.740 [INFO][4256] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.765 [INFO][4282] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" HandleID="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.766 [INFO][4282] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" HandleID="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64779d767d-8dt8r", "timestamp":"2025-07-14 21:47:36.765866046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.766 [INFO][4282] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.766 [INFO][4282] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.766 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.777 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.781 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.787 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.789 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.791 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.791 [INFO][4282] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.793 [INFO][4282] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.796 [INFO][4282] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.802 [INFO][4282] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.802 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" host="localhost" Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.802 [INFO][4282] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:36.833139 containerd[1439]: 2025-07-14 21:47:36.803 [INFO][4282] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" HandleID="k8s-pod-network.08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.833735 containerd[1439]: 2025-07-14 21:47:36.805 [INFO][4256] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"63909ecb-5ac2-4278-909f-4d78ae798ccd", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64779d767d-8dt8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8db3f3b1613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:36.833735 containerd[1439]: 2025-07-14 21:47:36.805 [INFO][4256] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.833735 containerd[1439]: 2025-07-14 21:47:36.805 [INFO][4256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8db3f3b1613 ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.833735 containerd[1439]: 2025-07-14 21:47:36.808 
[INFO][4256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.833735 containerd[1439]: 2025-07-14 21:47:36.809 [INFO][4256] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"63909ecb-5ac2-4278-909f-4d78ae798ccd", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c", Pod:"calico-apiserver-64779d767d-8dt8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8db3f3b1613", MAC:"4e:b5:43:2f:a7:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:36.833735 containerd[1439]: 2025-07-14 21:47:36.822 [INFO][4256] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-8dt8r" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:36.878034 containerd[1439]: time="2025-07-14T21:47:36.877021027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:36.878034 containerd[1439]: time="2025-07-14T21:47:36.877101907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:36.878034 containerd[1439]: time="2025-07-14T21:47:36.877120947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:36.879341 containerd[1439]: time="2025-07-14T21:47:36.878698990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:36.907060 systemd[1]: Started cri-containerd-08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c.scope - libcontainer container 08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c. Jul 14 21:47:36.920395 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:36.939056 containerd[1439]: time="2025-07-14T21:47:36.939005528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-8dt8r,Uid:63909ecb-5ac2-4278-909f-4d78ae798ccd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c\"" Jul 14 21:47:36.941221 containerd[1439]: time="2025-07-14T21:47:36.941186892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:47:37.417770 containerd[1439]: time="2025-07-14T21:47:37.417686170Z" level=info msg="StopPodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\"" Jul 14 21:47:37.418130 containerd[1439]: time="2025-07-14T21:47:37.417703730Z" level=info msg="StopPodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\"" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.471 [INFO][4418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.472 [INFO][4418] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" iface="eth0" netns="/var/run/netns/cni-92e14be9-915c-228b-f307-ad7748d5837a" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.472 [INFO][4418] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" iface="eth0" netns="/var/run/netns/cni-92e14be9-915c-228b-f307-ad7748d5837a" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.473 [INFO][4418] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" iface="eth0" netns="/var/run/netns/cni-92e14be9-915c-228b-f307-ad7748d5837a" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.473 [INFO][4418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.473 [INFO][4418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.492 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.492 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.492 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
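[Annotation] The IPAM walk above (look up host affinities → try affinity for 192.168.88.128/26 → load block → write block to claim) hands 192.168.88.130 to calico-apiserver-64779d767d-8dt8r from the host-affine block. Calico's default IPv4 block size is /26, so that block spans 192.168.88.128–192.168.88.191 (64 addresses). A quick containment check with the standard library, using the addresses assigned in this log:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The host-affine IPAM block from the log; a /26 holds 64 addresses.
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.132", "192.168.88.133"} {
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(netip.MustParseAddr(s)))
	}
	fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64
}
```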
Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.505 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.505 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.510 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:37.515587 containerd[1439]: 2025-07-14 21:47:37.513 [INFO][4418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:37.515587 containerd[1439]: time="2025-07-14T21:47:37.515459445Z" level=info msg="TearDown network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" successfully" Jul 14 21:47:37.515587 containerd[1439]: time="2025-07-14T21:47:37.515488845Z" level=info msg="StopPodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" returns successfully" Jul 14 21:47:37.519263 containerd[1439]: time="2025-07-14T21:47:37.518703330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcc6m,Uid:2c011bf7-d865-42c4-a2c0-d53c4ee5f22f,Namespace:calico-system,Attempt:1,}" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.465 [INFO][4409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.466 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" iface="eth0" netns="/var/run/netns/cni-929447aa-2397-49f9-71ea-18a08469f3e4" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.466 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" iface="eth0" netns="/var/run/netns/cni-929447aa-2397-49f9-71ea-18a08469f3e4" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.466 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" iface="eth0" netns="/var/run/netns/cni-929447aa-2397-49f9-71ea-18a08469f3e4" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.466 [INFO][4409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.466 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.496 [INFO][4430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.497 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.511 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.521 [WARNING][4430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.521 [INFO][4430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.523 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:37.545638 containerd[1439]: 2025-07-14 21:47:37.525 [INFO][4409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:37.548379 systemd[1]: run-netns-cni\x2d92e14be9\x2d915c\x2d228b\x2df307\x2dad7748d5837a.mount: Deactivated successfully. Jul 14 21:47:37.552696 systemd[1]: run-netns-cni\x2d929447aa\x2d2397\x2d49f9\x2d71ea\x2d18a08469f3e4.mount: Deactivated successfully. Jul 14 21:47:37.553951 containerd[1439]: time="2025-07-14T21:47:37.553909946Z" level=info msg="TearDown network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" successfully" Jul 14 21:47:37.554050 containerd[1439]: time="2025-07-14T21:47:37.553953386Z" level=info msg="StopPodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" returns successfully" Jul 14 21:47:37.554959 containerd[1439]: time="2025-07-14T21:47:37.554930107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-x5ktd,Uid:51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:47:37.587727 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:48278.service - OpenSSH per-connection server daemon (10.0.0.1:48278). 
Jul 14 21:47:37.638662 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 48278 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:37.642422 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:37.649421 systemd-logind[1419]: New session 8 of user core. Jul 14 21:47:37.658621 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 21:47:37.669608 systemd-networkd[1364]: cali519b3e148c2: Link UP Jul 14 21:47:37.669788 systemd-networkd[1364]: cali519b3e148c2: Gained carrier Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.567 [INFO][4447] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.588 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bcc6m-eth0 csi-node-driver- calico-system 2c011bf7-d865-42c4-a2c0-d53c4ee5f22f 988 0 2025-07-14 21:47:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bcc6m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali519b3e148c2 [] [] }} ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.588 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.626 [INFO][4478] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" HandleID="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.626 [INFO][4478] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" HandleID="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502b30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bcc6m", "timestamp":"2025-07-14 21:47:37.62610202 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.626 [INFO][4478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.626 [INFO][4478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.626 [INFO][4478] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.637 [INFO][4478] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.643 [INFO][4478] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.648 [INFO][4478] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.651 [INFO][4478] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.653 [INFO][4478] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.653 [INFO][4478] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.654 [INFO][4478] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.657 [INFO][4478] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.662 [INFO][4478] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.662 [INFO][4478] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" host="localhost" Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.662 [INFO][4478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
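[Annotation] The csi-node-driver assignment above repeats the same pattern and takes the next free address (.131) from the same block, serialized by the "host-wide IPAM lock". Conceptually, Calico IPAM is a used-address map over the host-affine block, claimed under that lock via a datastore write keyed by a handle ID. A toy model of that allocation logic — not Calico's code, just the idea; the pre-populated handles are abbreviated sandbox IDs from this log:

```go
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy model of a Calico IPAM block: a /26 with a
// used-address set, guarded by one lock (the log's "host-wide IPAM lock").
type block struct {
	mu     sync.Mutex
	prefix netip.Prefix
	used   map[netip.Addr]string // addr -> handle ID
}

func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer b.mu.Unlock()
	for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle // "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block full")
}

func main() {
	b := &block{
		prefix: netip.MustParsePrefix("192.168.88.128/26"),
		used: map[netip.Addr]string{
			// Addresses already handed out earlier in this log.
			netip.MustParseAddr("192.168.88.128"): "reserved",
			netip.MustParseAddr("192.168.88.129"): "k8s-pod-network.c01a77 (whisker sandbox)",
			netip.MustParseAddr("192.168.88.130"): "k8s-pod-network.08223b (apiserver-8dt8r)",
		},
	}
	a, _ := b.assign("k8s-pod-network.4fca8d (csi-node-driver-bcc6m)")
	fmt.Println("assigned:", a) // 192.168.88.131, as in the log
}
```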
Jul 14 21:47:37.687557 containerd[1439]: 2025-07-14 21:47:37.662 [INFO][4478] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" HandleID="k8s-pod-network.4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.688138 containerd[1439]: 2025-07-14 21:47:37.666 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bcc6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bcc6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali519b3e148c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:37.688138 containerd[1439]: 2025-07-14 21:47:37.666 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.688138 containerd[1439]: 2025-07-14 21:47:37.666 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali519b3e148c2 ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.688138 containerd[1439]: 2025-07-14 21:47:37.668 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.688138 containerd[1439]: 2025-07-14 21:47:37.669 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bcc6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a", Pod:"csi-node-driver-bcc6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali519b3e148c2", MAC:"1a:ff:66:48:7b:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:37.688138 containerd[1439]: 2025-07-14 21:47:37.685 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a" Namespace="calico-system" Pod="csi-node-driver-bcc6m" WorkloadEndpoint="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:37.700383 containerd[1439]: time="2025-07-14T21:47:37.700285058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:37.700383 containerd[1439]: time="2025-07-14T21:47:37.700346018Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:37.700383 containerd[1439]: time="2025-07-14T21:47:37.700360978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:37.700603 containerd[1439]: time="2025-07-14T21:47:37.700449578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:37.724636 systemd[1]: Started cri-containerd-4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a.scope - libcontainer container 4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a. 
Jul 14 21:47:37.733701 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:37.744911 containerd[1439]: time="2025-07-14T21:47:37.744874288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bcc6m,Uid:2c011bf7-d865-42c4-a2c0-d53c4ee5f22f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a\"" Jul 14 21:47:37.770798 systemd-networkd[1364]: cali3521174f397: Link UP Jul 14 21:47:37.771536 systemd-networkd[1364]: cali3521174f397: Gained carrier Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.600 [INFO][4463] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.629 [INFO][4463] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0 calico-apiserver-64779d767d- calico-apiserver 51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81 987 0 2025-07-14 21:47:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64779d767d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-64779d767d-x5ktd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3521174f397 [] [] }} ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.629 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.665 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" HandleID="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.665 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" HandleID="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c38f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-64779d767d-x5ktd", "timestamp":"2025-07-14 21:47:37.665015962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.665 [INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.665 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.665 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.739 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.746 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.752 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.754 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.756 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.756 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.757 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.760 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.765 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.765 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" host="localhost" Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.765 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:47:37.788914 containerd[1439]: 2025-07-14 21:47:37.765 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" HandleID="k8s-pod-network.960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.789507 containerd[1439]: 2025-07-14 21:47:37.767 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-64779d767d-x5ktd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3521174f397", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:37.789507 containerd[1439]: 2025-07-14 21:47:37.767 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.789507 containerd[1439]: 2025-07-14 21:47:37.767 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3521174f397 ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.789507 containerd[1439]: 2025-07-14 21:47:37.773 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.789507 containerd[1439]: 2025-07-14 21:47:37.774 [INFO][4463] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e", Pod:"calico-apiserver-64779d767d-x5ktd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3521174f397", MAC:"0e:32:7a:f7:7c:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:37.789507 containerd[1439]: 2025-07-14 21:47:37.785 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e" Namespace="calico-apiserver" Pod="calico-apiserver-64779d767d-x5ktd" WorkloadEndpoint="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:37.805993 containerd[1439]: time="2025-07-14T21:47:37.805916305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:37.805993 containerd[1439]: time="2025-07-14T21:47:37.805967025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:37.805993 containerd[1439]: time="2025-07-14T21:47:37.805978225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:37.806324 containerd[1439]: time="2025-07-14T21:47:37.806063185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:37.830659 systemd[1]: Started cri-containerd-960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e.scope - libcontainer container 960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e. 
Jul 14 21:47:37.845888 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:37.870764 containerd[1439]: time="2025-07-14T21:47:37.870726328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64779d767d-x5ktd,Uid:51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e\"" Jul 14 21:47:37.890282 sshd[4474]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:37.893410 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:48278.service: Deactivated successfully. Jul 14 21:47:37.896185 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:47:37.897947 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:47:37.898817 systemd-logind[1419]: Removed session 8. Jul 14 21:47:38.418555 containerd[1439]: time="2025-07-14T21:47:38.418231939Z" level=info msg="StopPodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\"" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.464 [INFO][4632] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.465 [INFO][4632] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" iface="eth0" netns="/var/run/netns/cni-73eaaf62-0688-f253-a019-a1f270d07d78" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.465 [INFO][4632] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" iface="eth0" netns="/var/run/netns/cni-73eaaf62-0688-f253-a019-a1f270d07d78" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.465 [INFO][4632] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" iface="eth0" netns="/var/run/netns/cni-73eaaf62-0688-f253-a019-a1f270d07d78" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.466 [INFO][4632] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.466 [INFO][4632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.486 [INFO][4641] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.486 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.486 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.498 [WARNING][4641] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.498 [INFO][4641] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.500 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:38.505981 containerd[1439]: 2025-07-14 21:47:38.504 [INFO][4632] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:38.506604 containerd[1439]: time="2025-07-14T21:47:38.506573915Z" level=info msg="TearDown network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" successfully" Jul 14 21:47:38.506689 containerd[1439]: time="2025-07-14T21:47:38.506673716Z" level=info msg="StopPodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" returns successfully" Jul 14 21:47:38.507396 containerd[1439]: time="2025-07-14T21:47:38.507366997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tbxpx,Uid:7b023aed-807c-42ef-982d-e9e0dbb828c3,Namespace:calico-system,Attempt:1,}" Jul 14 21:47:38.547514 systemd[1]: run-netns-cni\x2d73eaaf62\x2d0688\x2df253\x2da019\x2da1f270d07d78.mount: Deactivated successfully. Jul 14 21:47:38.717103 systemd-networkd[1364]: cali42692a0795a: Link UP Jul 14 21:47:38.718636 systemd-networkd[1364]: cali42692a0795a: Gained carrier Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.593 [INFO][4649] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.613 [INFO][4649] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0 goldmane-768f4c5c69- calico-system 7b023aed-807c-42ef-982d-e9e0dbb828c3 1007 0 2025-07-14 21:47:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-tbxpx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali42692a0795a [] [] }} ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.613 [INFO][4649] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.668 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" 
HandleID="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.669 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" HandleID="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012b140), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-tbxpx", "timestamp":"2025-07-14 21:47:38.668891446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.669 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.669 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.669 [INFO][4667] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.678 [INFO][4667] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.683 [INFO][4667] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.690 [INFO][4667] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.693 [INFO][4667] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.697 [INFO][4667] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.697 [INFO][4667] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.698 [INFO][4667] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65 Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.704 [INFO][4667] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.711 [INFO][4667] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" host="localhost" Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.712 [INFO][4667] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" host="localhost" Jul 14 
21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.712 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:38.734938 containerd[1439]: 2025-07-14 21:47:38.712 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" HandleID="k8s-pod-network.efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.735684 containerd[1439]: 2025-07-14 21:47:38.715 [INFO][4649] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b023aed-807c-42ef-982d-e9e0dbb828c3", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-tbxpx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42692a0795a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:38.735684 containerd[1439]: 2025-07-14 21:47:38.715 [INFO][4649] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.735684 containerd[1439]: 2025-07-14 21:47:38.715 [INFO][4649] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali42692a0795a ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.735684 containerd[1439]: 2025-07-14 21:47:38.717 [INFO][4649] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.735684 containerd[1439]: 2025-07-14 21:47:38.719 [INFO][4649] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" 
Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b023aed-807c-42ef-982d-e9e0dbb828c3", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65", Pod:"goldmane-768f4c5c69-tbxpx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42692a0795a", MAC:"76:30:82:b8:ef:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:38.735684 containerd[1439]: 2025-07-14 21:47:38.732 [INFO][4649] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65" Namespace="calico-system" Pod="goldmane-768f4c5c69-tbxpx" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:38.756296 containerd[1439]: time="2025-07-14T21:47:38.756210421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:38.756296 containerd[1439]: time="2025-07-14T21:47:38.756270261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:38.756557 containerd[1439]: time="2025-07-14T21:47:38.756288861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:38.756557 containerd[1439]: time="2025-07-14T21:47:38.756377861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:38.762600 systemd-networkd[1364]: cali519b3e148c2: Gained IPv6LL Jul 14 21:47:38.787671 systemd[1]: Started cri-containerd-efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65.scope - libcontainer container efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65. 
Jul 14 21:47:38.802015 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:38.821249 containerd[1439]: time="2025-07-14T21:47:38.821098441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tbxpx,Uid:7b023aed-807c-42ef-982d-e9e0dbb828c3,Namespace:calico-system,Attempt:1,} returns sandbox id \"efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65\"" Jul 14 21:47:38.826569 systemd-networkd[1364]: cali8db3f3b1613: Gained IPv6LL Jul 14 21:47:39.216857 containerd[1439]: time="2025-07-14T21:47:39.216806084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:39.218775 containerd[1439]: time="2025-07-14T21:47:39.218742967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 14 21:47:39.219709 containerd[1439]: time="2025-07-14T21:47:39.219659769Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:39.222172 containerd[1439]: time="2025-07-14T21:47:39.222124732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:39.223000 containerd[1439]: time="2025-07-14T21:47:39.222966374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.281616682s" Jul 14 21:47:39.223077 containerd[1439]: time="2025-07-14T21:47:39.223002214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:47:39.225821 containerd[1439]: time="2025-07-14T21:47:39.225786498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 21:47:39.227851 containerd[1439]: time="2025-07-14T21:47:39.227790981Z" level=info msg="CreateContainer within sandbox \"08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:47:39.236892 containerd[1439]: time="2025-07-14T21:47:39.236792995Z" level=info msg="CreateContainer within sandbox \"08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b6a03142e884d43229cfb3f9ed86f4a586abb0bcd1309b004495bb2c926b17f\"" Jul 14 21:47:39.237675 containerd[1439]: time="2025-07-14T21:47:39.237587196Z" level=info msg="StartContainer for \"8b6a03142e884d43229cfb3f9ed86f4a586abb0bcd1309b004495bb2c926b17f\"" Jul 14 21:47:39.276668 systemd[1]: Started cri-containerd-8b6a03142e884d43229cfb3f9ed86f4a586abb0bcd1309b004495bb2c926b17f.scope - libcontainer container 8b6a03142e884d43229cfb3f9ed86f4a586abb0bcd1309b004495bb2c926b17f. 
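The `PullImage` → `CreateContainer` → `StartContainer` sequence above is kubelet driving containerd over the CRI gRPC surface. A minimal sketch of the equivalent flow through containerd's public Go client instead — the image ref is taken from the log, while the container and snapshot IDs here are illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Talk to the same containerd these records come from; CRI-managed
	// pods live in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: resolve the repo tag, fetch, and unpack a snapshot.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: a container record plus a writable snapshot.
	container, err := client.NewContainer(ctx, "apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (the shim/runc process) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s", task.ID())
}
```

The `systemd[1]: Started cri-containerd-<id>.scope` records above correspond to the task-start step: with the systemd cgroup driver, each container task runs in a transient `.scope` unit.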
Jul 14 21:47:39.339693 containerd[1439]: time="2025-07-14T21:47:39.339640990Z" level=info msg="StartContainer for \"8b6a03142e884d43229cfb3f9ed86f4a586abb0bcd1309b004495bb2c926b17f\" returns successfully" Jul 14 21:47:39.418563 containerd[1439]: time="2025-07-14T21:47:39.418511588Z" level=info msg="StopPodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\"" Jul 14 21:47:39.419453 containerd[1439]: time="2025-07-14T21:47:39.419149269Z" level=info msg="StopPodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\"" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.481 [INFO][4802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.482 [INFO][4802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" iface="eth0" netns="/var/run/netns/cni-b0d13289-7c60-3eb7-f879-5b319070ebb2" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.483 [INFO][4802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" iface="eth0" netns="/var/run/netns/cni-b0d13289-7c60-3eb7-f879-5b319070ebb2" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.483 [INFO][4802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" iface="eth0" netns="/var/run/netns/cni-b0d13289-7c60-3eb7-f879-5b319070ebb2" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.483 [INFO][4802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.483 [INFO][4802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.504 [INFO][4832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.504 [INFO][4832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.504 [INFO][4832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.515 [WARNING][4832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.515 [INFO][4832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.516 [INFO][4832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:39.522398 containerd[1439]: 2025-07-14 21:47:39.518 [INFO][4802] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:39.523297 containerd[1439]: time="2025-07-14T21:47:39.522491985Z" level=info msg="TearDown network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" successfully" Jul 14 21:47:39.523297 containerd[1439]: time="2025-07-14T21:47:39.522518985Z" level=info msg="StopPodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" returns successfully" Jul 14 21:47:39.523401 kubelet[2465]: E0714 21:47:39.522834 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:39.524262 containerd[1439]: time="2025-07-14T21:47:39.523560467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99stm,Uid:7aeaa372-8400-4d54-bcde-fb86f1edd957,Namespace:kube-system,Attempt:1,}" Jul 14 21:47:39.530625 systemd-networkd[1364]: cali3521174f397: Gained IPv6LL Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.481 [INFO][4812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.481 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" iface="eth0" netns="/var/run/netns/cni-4d263570-5595-37ec-1e83-6434b7e8abb4" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.482 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" iface="eth0" netns="/var/run/netns/cni-4d263570-5595-37ec-1e83-6434b7e8abb4" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.482 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" iface="eth0" netns="/var/run/netns/cni-4d263570-5595-37ec-1e83-6434b7e8abb4" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.482 [INFO][4812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.482 [INFO][4812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.505 [INFO][4829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.505 [INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.516 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.528 [WARNING][4829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.528 [INFO][4829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.531 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:39.539463 containerd[1439]: 2025-07-14 21:47:39.537 [INFO][4812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:39.539862 containerd[1439]: time="2025-07-14T21:47:39.539838811Z" level=info msg="TearDown network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" successfully" Jul 14 21:47:39.539890 containerd[1439]: time="2025-07-14T21:47:39.539866491Z" level=info msg="StopPodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" returns successfully" Jul 14 21:47:39.540626 containerd[1439]: time="2025-07-14T21:47:39.540585852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59964484c9-cfz2t,Uid:caacf5f1-e0ed-4877-bf6f-031cb7eea2e7,Namespace:calico-system,Attempt:1,}" Jul 14 21:47:39.548887 systemd[1]: run-netns-cni\x2d4d263570\x2d5595\x2d37ec\x2d1e83\x2d6434b7e8abb4.mount: Deactivated successfully. Jul 14 21:47:39.549236 systemd[1]: run-netns-cni\x2db0d13289\x2d7c60\x2d3eb7\x2df879\x2d5b319070ebb2.mount: Deactivated successfully. 
Jul 14 21:47:39.664310 systemd-networkd[1364]: calie5ba2bfa318: Link UP Jul 14 21:47:39.664623 systemd-networkd[1364]: calie5ba2bfa318: Gained carrier Jul 14 21:47:39.678790 kubelet[2465]: I0714 21:47:39.678727 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64779d767d-8dt8r" podStartSLOduration=25.394772136 podStartE2EDuration="27.678709661s" podCreationTimestamp="2025-07-14 21:47:12 +0000 UTC" firstStartedPulling="2025-07-14 21:47:36.940699691 +0000 UTC m=+39.626383852" lastFinishedPulling="2025-07-14 21:47:39.224637216 +0000 UTC m=+41.910321377" observedRunningTime="2025-07-14 21:47:39.625676741 +0000 UTC m=+42.311360902" watchObservedRunningTime="2025-07-14 21:47:39.678709661 +0000 UTC m=+42.364393822" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.567 [INFO][4848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.582 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--99stm-eth0 coredns-674b8bbfcf- kube-system 7aeaa372-8400-4d54-bcde-fb86f1edd957 1022 0 2025-07-14 21:47:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-99stm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie5ba2bfa318 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.582 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.615 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" HandleID="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.615 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" HandleID="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-99stm", "timestamp":"2025-07-14 21:47:39.615659046 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.615 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
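The `pod_startup_latency_tracker` record above is checkable by hand: `podStartE2EDuration` is `observedRunningTime` minus `podCreationTimestamp`, and `podStartSLOduration` subtracts the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`) from it. Reproducing the arithmetic with the logged timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the kubelet record above.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-07-14 21:47:12 +0000 UTC")
	firstPull := parse("2025-07-14 21:47:36.940699691 +0000 UTC")
	lastPull := parse("2025-07-14 21:47:39.224637216 +0000 UTC")
	running := parse("2025-07-14 21:47:39.678709661 +0000 UTC")

	e2e := running.Sub(created)          // 27.678709661s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 25.394772136s, the logged podStartSLOduration
	fmt.Println(e2e, slo)
}
```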
Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.616 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.616 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.631 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.636 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.641 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.643 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.645 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.645 [INFO][4876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.647 [INFO][4876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.651 [INFO][4876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.656 [INFO][4876] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.656 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" host="localhost" Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.656 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
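The ipam.go records above trace block-affinity assignment: confirm this host's affinity to 192.168.88.128/26, load the block, claim the next free address, and write the block back while holding the host-wide lock. A toy sketch of the claim step — the block type and the pre-claimed range are illustrative stand-ins for Calico's datastore-backed block, while the handle string is the one from the log — which lands on 192.168.88.134 just as the coredns flow above does:

```go
package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for a Calico IPAM allocation block.
type block struct {
	cidr      netip.Prefix          // e.g. 192.168.88.128/26
	allocated map[netip.Addr]string // address -> owning handle
}

// assign claims the first free address for handle, mirroring "Attempting to
// assign 1 addresses from block" / "Writing block in order to claim IPs"
// above (minus the datastore compare-and-swap on write-back).
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, used := b.allocated[a]; !used {
			b.allocated[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{},
	}
	// Pretend .128 through .133 were claimed earlier in this boot
	// (illustrative; the log only shows .133 and later being handed out).
	for a, n := netip.MustParseAddr("192.168.88.128"), 0; n < 6; a, n = a.Next(), n+1 {
		b.allocated[a] = "earlier-claim"
	}
	ip, _ := b.assign("k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec")
	fmt.Println("claimed", ip) // 192.168.88.134, as in the log
}
```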
Jul 14 21:47:39.681714 containerd[1439]: 2025-07-14 21:47:39.657 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" HandleID="k8s-pod-network.e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.682330 containerd[1439]: 2025-07-14 21:47:39.659 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--99stm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeaa372-8400-4d54-bcde-fb86f1edd957", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-99stm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5ba2bfa318", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:39.682330 containerd[1439]: 2025-07-14 21:47:39.659 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.682330 containerd[1439]: 2025-07-14 21:47:39.659 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5ba2bfa318 ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.682330 containerd[1439]: 2025-07-14 21:47:39.663 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.682330 
containerd[1439]: 2025-07-14 21:47:39.665 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--99stm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeaa372-8400-4d54-bcde-fb86f1edd957", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec", Pod:"coredns-674b8bbfcf-99stm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5ba2bfa318", MAC:"1e:e7:ba:d2:eb:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:39.682330 containerd[1439]: 2025-07-14 21:47:39.679 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec" Namespace="kube-system" Pod="coredns-674b8bbfcf-99stm" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:39.697563 containerd[1439]: time="2025-07-14T21:47:39.697426649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:39.697563 containerd[1439]: time="2025-07-14T21:47:39.697523729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:39.697563 containerd[1439]: time="2025-07-14T21:47:39.697539609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:39.697799 containerd[1439]: time="2025-07-14T21:47:39.697716329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:39.724074 systemd[1]: Started cri-containerd-e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec.scope - libcontainer container e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec. Jul 14 21:47:39.742744 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:39.773672 containerd[1439]: time="2025-07-14T21:47:39.773537243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-99stm,Uid:7aeaa372-8400-4d54-bcde-fb86f1edd957,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec\"" Jul 14 21:47:39.776037 systemd-networkd[1364]: califfb832edf1b: Link UP Jul 14 21:47:39.776250 systemd-networkd[1364]: califfb832edf1b: Gained carrier Jul 14 21:47:39.776396 kubelet[2465]: E0714 21:47:39.776236 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:39.784965 containerd[1439]: time="2025-07-14T21:47:39.784904181Z" level=info msg="CreateContainer within sandbox \"e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.575 [INFO][4859] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.591 [INFO][4859] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0 calico-kube-controllers-59964484c9- calico-system caacf5f1-e0ed-4877-bf6f-031cb7eea2e7 1023 0 2025-07-14 21:47:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59964484c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59964484c9-cfz2t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califfb832edf1b [] [] }} ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.591 [INFO][4859] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.619 [INFO][4881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" HandleID="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.619 [INFO][4881] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" 
HandleID="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59964484c9-cfz2t", "timestamp":"2025-07-14 21:47:39.619565491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.619 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.656 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.656 [INFO][4881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.734 [INFO][4881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.739 [INFO][4881] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.744 [INFO][4881] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.746 [INFO][4881] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.749 [INFO][4881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.749 [INFO][4881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.752 [INFO][4881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0 Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.760 [INFO][4881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.768 [INFO][4881] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.768 [INFO][4881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" host="localhost" Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.768 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:47:39.791136 containerd[1439]: 2025-07-14 21:47:39.768 [INFO][4881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" HandleID="k8s-pod-network.e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.792404 containerd[1439]: 2025-07-14 21:47:39.772 [INFO][4859] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0", GenerateName:"calico-kube-controllers-59964484c9-", Namespace:"calico-system", SelfLink:"", UID:"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59964484c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59964484c9-cfz2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb832edf1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:39.792404 containerd[1439]: 2025-07-14 21:47:39.772 [INFO][4859] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.792404 containerd[1439]: 2025-07-14 21:47:39.772 [INFO][4859] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfb832edf1b ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.792404 containerd[1439]: 2025-07-14 21:47:39.776 [INFO][4859] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.792404 containerd[1439]: 2025-07-14 21:47:39.777 [INFO][4859] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0", GenerateName:"calico-kube-controllers-59964484c9-", Namespace:"calico-system", SelfLink:"", UID:"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59964484c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0", Pod:"calico-kube-controllers-59964484c9-cfz2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb832edf1b", MAC:"a2:10:0f:f1:44:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:39.792404 containerd[1439]: 2025-07-14 21:47:39.787 [INFO][4859] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0" Namespace="calico-system" Pod="calico-kube-controllers-59964484c9-cfz2t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:39.808165 containerd[1439]: time="2025-07-14T21:47:39.808119576Z" level=info msg="CreateContainer within sandbox \"e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9025780584fb1fafb9f48054368a4125c2bb85fb7d5a17ebde63ce5a8c00d66\"" Jul 14 21:47:39.809103 containerd[1439]: time="2025-07-14T21:47:39.809054217Z" level=info msg="StartContainer for \"f9025780584fb1fafb9f48054368a4125c2bb85fb7d5a17ebde63ce5a8c00d66\"" Jul 14 21:47:39.821550 containerd[1439]: time="2025-07-14T21:47:39.820468114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:39.821550 containerd[1439]: time="2025-07-14T21:47:39.821506636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:39.821550 containerd[1439]: time="2025-07-14T21:47:39.821525076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:39.822399 containerd[1439]: time="2025-07-14T21:47:39.822140477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:39.850674 systemd[1]: Started cri-containerd-e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0.scope - libcontainer container e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0. Jul 14 21:47:39.852948 systemd[1]: Started cri-containerd-f9025780584fb1fafb9f48054368a4125c2bb85fb7d5a17ebde63ce5a8c00d66.scope - libcontainer container f9025780584fb1fafb9f48054368a4125c2bb85fb7d5a17ebde63ce5a8c00d66. Jul 14 21:47:39.869343 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:39.899002 containerd[1439]: time="2025-07-14T21:47:39.898632672Z" level=info msg="StartContainer for \"f9025780584fb1fafb9f48054368a4125c2bb85fb7d5a17ebde63ce5a8c00d66\" returns successfully" Jul 14 21:47:39.917078 containerd[1439]: time="2025-07-14T21:47:39.916988020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59964484c9-cfz2t,Uid:caacf5f1-e0ed-4877-bf6f-031cb7eea2e7,Namespace:calico-system,Attempt:1,} returns sandbox id \"e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0\"" Jul 14 21:47:39.979303 systemd-networkd[1364]: cali42692a0795a: Gained IPv6LL Jul 14 21:47:40.417834 containerd[1439]: time="2025-07-14T21:47:40.417793240Z" level=info msg="StopPodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\"" Jul 14 21:47:40.483254 containerd[1439]: time="2025-07-14T21:47:40.482595335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:40.484685 containerd[1439]: time="2025-07-14T21:47:40.484621458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 14 21:47:40.486327 containerd[1439]: time="2025-07-14T21:47:40.486292620Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:40.490287 containerd[1439]: time="2025-07-14T21:47:40.490208386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:40.491508 containerd[1439]: time="2025-07-14T21:47:40.491465228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.26551333s" Jul 14 21:47:40.491508 containerd[1439]: time="2025-07-14T21:47:40.491503148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 14 21:47:40.492682 containerd[1439]: time="2025-07-14T21:47:40.492378029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:47:40.497942 containerd[1439]: time="2025-07-14T21:47:40.497887757Z" level=info msg="CreateContainer within sandbox \"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 21:47:40.518699 containerd[1439]: 
time="2025-07-14T21:47:40.515696424Z" level=info msg="CreateContainer within sandbox \"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8d05b94118b872e4ced11f720259a0941e2610affc92470bcbebf4d0832be8e4\"" Jul 14 21:47:40.519087 containerd[1439]: time="2025-07-14T21:47:40.519058269Z" level=info msg="StartContainer for \"8d05b94118b872e4ced11f720259a0941e2610affc92470bcbebf4d0832be8e4\"" Jul 14 21:47:40.549906 systemd[1]: run-containerd-runc-k8s.io-e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec-runc.ozggWf.mount: Deactivated successfully. Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.481 [INFO][5055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.481 [INFO][5055] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" iface="eth0" netns="/var/run/netns/cni-112d4313-523b-c117-4afc-6f85583efe83" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.482 [INFO][5055] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" iface="eth0" netns="/var/run/netns/cni-112d4313-523b-c117-4afc-6f85583efe83" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.482 [INFO][5055] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" iface="eth0" netns="/var/run/netns/cni-112d4313-523b-c117-4afc-6f85583efe83" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.482 [INFO][5055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.482 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.520 [INFO][5077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.520 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.521 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.531 [WARNING][5077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.532 [INFO][5077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.543 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:40.567583 containerd[1439]: 2025-07-14 21:47:40.561 [INFO][5055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:40.568209 containerd[1439]: time="2025-07-14T21:47:40.568153181Z" level=info msg="TearDown network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" successfully" Jul 14 21:47:40.568269 containerd[1439]: time="2025-07-14T21:47:40.568233381Z" level=info msg="StopPodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" returns successfully" Jul 14 21:47:40.572052 kubelet[2465]: E0714 21:47:40.569768 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:40.573232 systemd[1]: Started cri-containerd-8d05b94118b872e4ced11f720259a0941e2610affc92470bcbebf4d0832be8e4.scope - libcontainer container 8d05b94118b872e4ced11f720259a0941e2610affc92470bcbebf4d0832be8e4. Jul 14 21:47:40.577389 systemd[1]: run-netns-cni\x2d112d4313\x2d523b\x2dc117\x2d4afc\x2d6f85583efe83.mount: Deactivated successfully. 
Jul 14 21:47:40.578823 containerd[1439]: time="2025-07-14T21:47:40.578781636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n28f9,Uid:8a6a9630-19de-4934-990f-1a0a5b55fdcd,Namespace:kube-system,Attempt:1,}" Jul 14 21:47:40.626223 containerd[1439]: time="2025-07-14T21:47:40.626166946Z" level=info msg="StartContainer for \"8d05b94118b872e4ced11f720259a0941e2610affc92470bcbebf4d0832be8e4\" returns successfully" Jul 14 21:47:40.635836 kubelet[2465]: E0714 21:47:40.635570 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:40.648877 kubelet[2465]: I0714 21:47:40.648288 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-99stm" podStartSLOduration=37.648270219 podStartE2EDuration="37.648270219s" podCreationTimestamp="2025-07-14 21:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:47:40.647414897 +0000 UTC m=+43.333099058" watchObservedRunningTime="2025-07-14 21:47:40.648270219 +0000 UTC m=+43.333954380" Jul 14 21:47:40.716815 systemd-networkd[1364]: cali0dbe579a49d: Link UP Jul 14 21:47:40.716981 systemd-networkd[1364]: cali0dbe579a49d: Gained carrier Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.612 [INFO][5115] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.630 [INFO][5115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--n28f9-eth0 coredns-674b8bbfcf- kube-system 8a6a9630-19de-4934-990f-1a0a5b55fdcd 1052 0 2025-07-14 21:47:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-n28f9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0dbe579a49d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.630 [INFO][5115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.672 [INFO][5142] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" HandleID="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.674 [INFO][5142] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" HandleID="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400058aac0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-n28f9", "timestamp":"2025-07-14 21:47:40.672856495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.674 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.674 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.674 [INFO][5142] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.685 [INFO][5142] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.690 [INFO][5142] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.695 [INFO][5142] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.696 [INFO][5142] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.698 [INFO][5142] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.698 [INFO][5142] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.700 [INFO][5142] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0 Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.704 [INFO][5142] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.710 [INFO][5142] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.710 [INFO][5142] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" host="localhost" Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.710 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
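The ipam.go trace above is Calico's block-affinity allocation: take the host-wide lock, look up the blocks affine to this host (here 192.168.88.128/26), load and confirm a block, claim the next free address, and write the block back. A toy sketch of that sequence, assuming hypothetical Block/IPAMStore types in place of Calico's datastore objects:

package main

import (
    "fmt"
    "net"
    "sync"
)

// Block stands in for a Calico allocation block such as 192.168.88.128/26.
type Block struct {
    CIDR net.IPNet
    free []net.IP
}

// Claim hands out the next free address and marks it used.
func (b *Block) Claim() (net.IP, bool) {
    if len(b.free) == 0 {
        return nil, false
    }
    ip := b.free[0]
    b.free = b.free[1:]
    return ip, true
}

type IPAMStore struct {
    mu       sync.Mutex // stands in for the host-wide IPAM lock
    affinity map[string][]*Block
}

// autoAssign follows the logged order of operations for one IPv4 request.
func (s *IPAMStore) autoAssign(host string) (net.IP, error) {
    s.mu.Lock()         // "Acquired host-wide IPAM lock."
    defer s.mu.Unlock() // "Released host-wide IPAM lock."
    for _, b := range s.affinity[host] { // "Looking up existing affinities for host"
        if ip, ok := b.Claim(); ok { // "Attempting to assign 1 addresses from block"
            return ip, nil // e.g. 192.168.88.136/26, then write the block back
        }
    }
    return nil, fmt.Errorf("no affine block with free addresses for host %s", host)
}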
Jul 14 21:47:40.732453 containerd[1439]: 2025-07-14 21:47:40.710 [INFO][5142] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" HandleID="k8s-pod-network.6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.733057 containerd[1439]: 2025-07-14 21:47:40.714 [INFO][5115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n28f9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a6a9630-19de-4934-990f-1a0a5b55fdcd", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-n28f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dbe579a49d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:40.733057 containerd[1439]: 2025-07-14 21:47:40.714 [INFO][5115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.733057 containerd[1439]: 2025-07-14 21:47:40.714 [INFO][5115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0dbe579a49d ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.733057 containerd[1439]: 2025-07-14 21:47:40.716 [INFO][5115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.733057 
containerd[1439]: 2025-07-14 21:47:40.717 [INFO][5115] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n28f9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a6a9630-19de-4934-990f-1a0a5b55fdcd", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0", Pod:"coredns-674b8bbfcf-n28f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dbe579a49d", MAC:"46:71:08:27:0a:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:40.733057 containerd[1439]: 2025-07-14 21:47:40.729 [INFO][5115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0" Namespace="kube-system" Pod="coredns-674b8bbfcf-n28f9" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:40.748011 containerd[1439]: time="2025-07-14T21:47:40.747791005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:47:40.748011 containerd[1439]: time="2025-07-14T21:47:40.747848525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:47:40.748011 containerd[1439]: time="2025-07-14T21:47:40.747859645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:40.748011 containerd[1439]: time="2025-07-14T21:47:40.747938765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:47:40.765689 systemd[1]: Started cri-containerd-6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0.scope - libcontainer container 6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0. Jul 14 21:47:40.780472 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:47:40.796349 containerd[1439]: time="2025-07-14T21:47:40.796312797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n28f9,Uid:8a6a9630-19de-4934-990f-1a0a5b55fdcd,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0\"" Jul 14 21:47:40.797464 kubelet[2465]: E0714 21:47:40.797229 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:40.801959 containerd[1439]: time="2025-07-14T21:47:40.801688124Z" level=info msg="CreateContainer within sandbox \"6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:47:40.815118 containerd[1439]: time="2025-07-14T21:47:40.815066904Z" level=info msg="CreateContainer within sandbox \"6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f44489be9f52bc5a3e5ec10c8c69bf56f19b679d644b3b54c407dd6898aef34\"" Jul 14 21:47:40.816928 containerd[1439]: time="2025-07-14T21:47:40.816297866Z" level=info msg="StartContainer for \"5f44489be9f52bc5a3e5ec10c8c69bf56f19b679d644b3b54c407dd6898aef34\"" Jul 14 21:47:40.839613 systemd[1]: Started cri-containerd-5f44489be9f52bc5a3e5ec10c8c69bf56f19b679d644b3b54c407dd6898aef34.scope - libcontainer container 5f44489be9f52bc5a3e5ec10c8c69bf56f19b679d644b3b54c407dd6898aef34. 
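The RunPodSandbox → CreateContainer → StartContainer sequence logged for coredns-674b8bbfcf-n28f9 is the standard CRI flow between kubelet and containerd. A bare-bones client sketch against the CRI gRPC API (k8s.io/cri-api); the pod and container configs are elided:

package main

import (
    "context"

    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// startPod runs the sandbox (which triggers the CNI ADD / Calico IPAM flow
// above), creates one container inside it, and starts that container.
func startPod(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
    pod *runtimeapi.PodSandboxConfig, ctr *runtimeapi.ContainerConfig) (string, error) {
    sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: pod})
    if err != nil {
        return "", err
    }
    created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
        PodSandboxId:  sb.PodSandboxId,
        Config:        ctr,
        SandboxConfig: pod,
    })
    if err != nil {
        return "", err
    }
    _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
    return created.ContainerId, err
}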
Jul 14 21:47:40.862917 containerd[1439]: time="2025-07-14T21:47:40.862741734Z" level=info msg="StartContainer for \"5f44489be9f52bc5a3e5ec10c8c69bf56f19b679d644b3b54c407dd6898aef34\" returns successfully" Jul 14 21:47:41.194588 systemd-networkd[1364]: calie5ba2bfa318: Gained IPv6LL Jul 14 21:47:41.514679 systemd-networkd[1364]: califfb832edf1b: Gained IPv6LL Jul 14 21:47:41.646536 kubelet[2465]: E0714 21:47:41.646503 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:41.650317 kubelet[2465]: E0714 21:47:41.650102 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:41.675631 kubelet[2465]: I0714 21:47:41.675573 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n28f9" podStartSLOduration=38.675556588 podStartE2EDuration="38.675556588s" podCreationTimestamp="2025-07-14 21:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:47:41.661916648 +0000 UTC m=+44.347600809" watchObservedRunningTime="2025-07-14 21:47:41.675556588 +0000 UTC m=+44.361240749" Jul 14 21:47:41.692010 containerd[1439]: time="2025-07-14T21:47:41.691966051Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:41.693419 containerd[1439]: time="2025-07-14T21:47:41.693374253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 21:47:41.695855 containerd[1439]: time="2025-07-14T21:47:41.695724937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.203314388s" Jul 14 21:47:41.695855 containerd[1439]: time="2025-07-14T21:47:41.695843857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:47:41.697747 containerd[1439]: time="2025-07-14T21:47:41.697687580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 21:47:41.701004 containerd[1439]: time="2025-07-14T21:47:41.700938944Z" level=info msg="CreateContainer within sandbox \"960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:47:41.718468 containerd[1439]: time="2025-07-14T21:47:41.718010049Z" level=info msg="CreateContainer within sandbox \"960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ea51ee5363b8e903b5e8ed6826050a50f77d8c63bb910fcb4623fc56439742dd\"" Jul 14 21:47:41.718839 containerd[1439]: time="2025-07-14T21:47:41.718798130Z" level=info msg="StartContainer for \"ea51ee5363b8e903b5e8ed6826050a50f77d8c63bb910fcb4623fc56439742dd\"" Jul 14 21:47:41.750622 systemd[1]: Started 
cri-containerd-ea51ee5363b8e903b5e8ed6826050a50f77d8c63bb910fcb4623fc56439742dd.scope - libcontainer container ea51ee5363b8e903b5e8ed6826050a50f77d8c63bb910fcb4623fc56439742dd. Jul 14 21:47:41.791514 containerd[1439]: time="2025-07-14T21:47:41.790913754Z" level=info msg="StartContainer for \"ea51ee5363b8e903b5e8ed6826050a50f77d8c63bb910fcb4623fc56439742dd\" returns successfully" Jul 14 21:47:42.346560 systemd-networkd[1364]: cali0dbe579a49d: Gained IPv6LL Jul 14 21:47:42.621168 kubelet[2465]: I0714 21:47:42.621056 2465 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:47:42.622613 kubelet[2465]: E0714 21:47:42.621427 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:42.660779 kubelet[2465]: E0714 21:47:42.660735 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:42.661108 kubelet[2465]: E0714 21:47:42.661061 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:42.661389 kubelet[2465]: E0714 21:47:42.661340 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:42.903372 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:49392.service - OpenSSH per-connection server daemon (10.0.0.1:49392). Jul 14 21:47:42.973072 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 49392 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:42.976228 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:42.983059 systemd-logind[1419]: New session 9 of user core. Jul 14 21:47:42.990619 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 21:47:43.287468 kernel: bpftool[5362]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 21:47:43.389157 sshd[5332]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:43.393670 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:49392.service: Deactivated successfully. Jul 14 21:47:43.396222 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:47:43.400251 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:47:43.403826 systemd-logind[1419]: Removed session 9. 
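The recurring kubelet dns.go:153 "Nameserver limits exceeded" events above fire because the node's resolv.conf lists more nameservers than kubelet allows per pod: it keeps the first three (the Kubernetes MaxDNSNameservers limit) and logs the rest as omitted, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that trim; the fourth entry in the example is hypothetical, since the omitted servers are not shown in the log:

package main

import "fmt"

// trimNameservers keeps at most three nameservers, mirroring kubelet's
// per-pod resolv.conf cap, and returns the omitted remainder.
func trimNameservers(ns []string) (applied, omitted []string) {
    const maxDNSNameservers = 3
    if len(ns) <= maxDNSNameservers {
        return ns, nil
    }
    return ns[:maxDNSNameservers], ns[maxDNSNameservers:]
}

func main() {
    applied, omitted := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
    fmt.Println(applied, omitted) // applied matches the logged nameserver line
}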
Jul 14 21:47:43.574954 systemd-networkd[1364]: vxlan.calico: Link UP Jul 14 21:47:43.574963 systemd-networkd[1364]: vxlan.calico: Gained carrier Jul 14 21:47:43.667274 kubelet[2465]: E0714 21:47:43.667137 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:47:43.979070 kubelet[2465]: I0714 21:47:43.978922 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64779d767d-x5ktd" podStartSLOduration=28.154868603 podStartE2EDuration="31.97890425s" podCreationTimestamp="2025-07-14 21:47:12 +0000 UTC" firstStartedPulling="2025-07-14 21:47:37.872537371 +0000 UTC m=+40.558221532" lastFinishedPulling="2025-07-14 21:47:41.696573058 +0000 UTC m=+44.382257179" observedRunningTime="2025-07-14 21:47:42.808404392 +0000 UTC m=+45.494088553" watchObservedRunningTime="2025-07-14 21:47:43.97890425 +0000 UTC m=+46.664588451" Jul 14 21:47:44.011255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978912486.mount: Deactivated successfully. Jul 14 21:47:44.412050 containerd[1439]: time="2025-07-14T21:47:44.411996595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:44.412617 containerd[1439]: time="2025-07-14T21:47:44.412422116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 14 21:47:44.413364 containerd[1439]: time="2025-07-14T21:47:44.413331557Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:44.416269 containerd[1439]: time="2025-07-14T21:47:44.416217601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:44.416824 containerd[1439]: time="2025-07-14T21:47:44.416791602Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.719059782s" Jul 14 21:47:44.416878 containerd[1439]: time="2025-07-14T21:47:44.416822562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 14 21:47:44.418621 containerd[1439]: time="2025-07-14T21:47:44.418590764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 21:47:44.422591 containerd[1439]: time="2025-07-14T21:47:44.422555330Z" level=info msg="CreateContainer within sandbox \"efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 21:47:44.435128 containerd[1439]: time="2025-07-14T21:47:44.435081867Z" level=info msg="CreateContainer within sandbox \"efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1b3b371754c1e38b2a9d629200f45f8b6e81343fbdf444f0c5a06f8b3f7ea194\"" Jul 14 
21:47:44.435939 containerd[1439]: time="2025-07-14T21:47:44.435910508Z" level=info msg="StartContainer for \"1b3b371754c1e38b2a9d629200f45f8b6e81343fbdf444f0c5a06f8b3f7ea194\"" Jul 14 21:47:44.469647 systemd[1]: Started cri-containerd-1b3b371754c1e38b2a9d629200f45f8b6e81343fbdf444f0c5a06f8b3f7ea194.scope - libcontainer container 1b3b371754c1e38b2a9d629200f45f8b6e81343fbdf444f0c5a06f8b3f7ea194. Jul 14 21:47:44.498126 containerd[1439]: time="2025-07-14T21:47:44.498062472Z" level=info msg="StartContainer for \"1b3b371754c1e38b2a9d629200f45f8b6e81343fbdf444f0c5a06f8b3f7ea194\" returns successfully" Jul 14 21:47:44.690143 kubelet[2465]: I0714 21:47:44.689965 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-tbxpx" podStartSLOduration=23.094937692 podStartE2EDuration="28.689946131s" podCreationTimestamp="2025-07-14 21:47:16 +0000 UTC" firstStartedPulling="2025-07-14 21:47:38.822858684 +0000 UTC m=+41.508542805" lastFinishedPulling="2025-07-14 21:47:44.417867083 +0000 UTC m=+47.103551244" observedRunningTime="2025-07-14 21:47:44.687606848 +0000 UTC m=+47.373290969" watchObservedRunningTime="2025-07-14 21:47:44.689946131 +0000 UTC m=+47.375630292" Jul 14 21:47:45.354622 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL Jul 14 21:47:47.501111 containerd[1439]: time="2025-07-14T21:47:47.501057855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:47.501881 containerd[1439]: time="2025-07-14T21:47:47.501845896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 14 21:47:47.502820 containerd[1439]: time="2025-07-14T21:47:47.502794417Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:47.505032 containerd[1439]: time="2025-07-14T21:47:47.504994780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:47.505734 containerd[1439]: time="2025-07-14T21:47:47.505671701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.087043457s" Jul 14 21:47:47.505789 containerd[1439]: time="2025-07-14T21:47:47.505737501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 14 21:47:47.507215 containerd[1439]: time="2025-07-14T21:47:47.506993382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 21:47:47.524619 containerd[1439]: time="2025-07-14T21:47:47.524557325Z" level=info msg="CreateContainer within sandbox \"e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 21:47:47.538924 containerd[1439]: time="2025-07-14T21:47:47.538738583Z" level=info msg="CreateContainer 
within sandbox \"e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a0dfe7f694eb838c8ea41b0669e6b4e7f6380a66d8f9cbb2df606b62aa94f6bd\"" Jul 14 21:47:47.539862 containerd[1439]: time="2025-07-14T21:47:47.539780984Z" level=info msg="StartContainer for \"a0dfe7f694eb838c8ea41b0669e6b4e7f6380a66d8f9cbb2df606b62aa94f6bd\"" Jul 14 21:47:47.572805 systemd[1]: Started cri-containerd-a0dfe7f694eb838c8ea41b0669e6b4e7f6380a66d8f9cbb2df606b62aa94f6bd.scope - libcontainer container a0dfe7f694eb838c8ea41b0669e6b4e7f6380a66d8f9cbb2df606b62aa94f6bd. Jul 14 21:47:47.609363 containerd[1439]: time="2025-07-14T21:47:47.609280233Z" level=info msg="StartContainer for \"a0dfe7f694eb838c8ea41b0669e6b4e7f6380a66d8f9cbb2df606b62aa94f6bd\" returns successfully" Jul 14 21:47:47.700765 kubelet[2465]: I0714 21:47:47.699592 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59964484c9-cfz2t" podStartSLOduration=23.111699709 podStartE2EDuration="30.699570668s" podCreationTimestamp="2025-07-14 21:47:17 +0000 UTC" firstStartedPulling="2025-07-14 21:47:39.918957783 +0000 UTC m=+42.604641904" lastFinishedPulling="2025-07-14 21:47:47.506828702 +0000 UTC m=+50.192512863" observedRunningTime="2025-07-14 21:47:47.698518707 +0000 UTC m=+50.384202868" watchObservedRunningTime="2025-07-14 21:47:47.699570668 +0000 UTC m=+50.385254829" Jul 14 21:47:48.402343 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:49406.service - OpenSSH per-connection server daemon (10.0.0.1:49406). Jul 14 21:47:48.457837 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 49406 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:48.461361 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:48.466338 systemd-logind[1419]: New session 10 of user core. Jul 14 21:47:48.472652 systemd[1]: Started session-10.scope - Session 10 of User core. 
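The pod_startup_latency_tracker entries report two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes time spent pulling images. Recomputing the calico-kube-controllers line above as a sanity check (timestamps transcribed from the log; the result lands within tens of nanoseconds of the logged 23.111699709s, the residue presumably from kubelet's monotonic-clock bookkeeping):

package main

import (
    "fmt"
    "time"
)

func main() {
    created := time.Date(2025, time.July, 14, 21, 47, 17, 0, time.UTC)
    running := time.Date(2025, time.July, 14, 21, 47, 47, 699570668, time.UTC)
    pullStart := time.Date(2025, time.July, 14, 21, 47, 39, 918957783, time.UTC)
    pullEnd := time.Date(2025, time.July, 14, 21, 47, 47, 506828702, time.UTC)

    e2e := running.Sub(created)         // podStartE2EDuration = 30.699570668s
    slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration ≈ 23.1117s
    fmt.Println(e2e, slo)
}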
Jul 14 21:47:48.855026 containerd[1439]: time="2025-07-14T21:47:48.854967927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:48.855756 containerd[1439]: time="2025-07-14T21:47:48.855687248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 14 21:47:48.857003 containerd[1439]: time="2025-07-14T21:47:48.856969049Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:48.860204 containerd[1439]: time="2025-07-14T21:47:48.859587812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:47:48.866634 containerd[1439]: time="2025-07-14T21:47:48.866592661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.359548559s" Jul 14 21:47:48.866634 containerd[1439]: time="2025-07-14T21:47:48.866637101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 14 21:47:48.870356 containerd[1439]: time="2025-07-14T21:47:48.870307946Z" level=info msg="CreateContainer within sandbox \"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 21:47:48.888275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005616074.mount: Deactivated successfully. Jul 14 21:47:48.900628 containerd[1439]: time="2025-07-14T21:47:48.900573344Z" level=info msg="CreateContainer within sandbox \"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2e3fe71bc1fa7ea9906a78b1747dd0fe7fe0a11f9a7661a5dcfcc2c55b6bee6b\"" Jul 14 21:47:48.901217 containerd[1439]: time="2025-07-14T21:47:48.901192145Z" level=info msg="StartContainer for \"2e3fe71bc1fa7ea9906a78b1747dd0fe7fe0a11f9a7661a5dcfcc2c55b6bee6b\"" Jul 14 21:47:48.941078 systemd[1]: Started cri-containerd-2e3fe71bc1fa7ea9906a78b1747dd0fe7fe0a11f9a7661a5dcfcc2c55b6bee6b.scope - libcontainer container 2e3fe71bc1fa7ea9906a78b1747dd0fe7fe0a11f9a7661a5dcfcc2c55b6bee6b. Jul 14 21:47:48.983486 containerd[1439]: time="2025-07-14T21:47:48.983334088Z" level=info msg="StartContainer for \"2e3fe71bc1fa7ea9906a78b1747dd0fe7fe0a11f9a7661a5dcfcc2c55b6bee6b\" returns successfully" Jul 14 21:47:49.015211 sshd[5662]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:49.026294 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:49406.service: Deactivated successfully. Jul 14 21:47:49.028264 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:47:49.029742 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. 
Jul 14 21:47:49.037709 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:49408.service - OpenSSH per-connection server daemon (10.0.0.1:49408). Jul 14 21:47:49.039000 systemd-logind[1419]: Removed session 10. Jul 14 21:47:49.068037 sshd[5727]: Accepted publickey for core from 10.0.0.1 port 49408 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:49.069470 sshd[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:49.076412 systemd-logind[1419]: New session 11 of user core. Jul 14 21:47:49.088594 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 21:47:49.304370 sshd[5727]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:49.313234 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:49408.service: Deactivated successfully. Jul 14 21:47:49.318024 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:47:49.322035 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Jul 14 21:47:49.336821 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:49424.service - OpenSSH per-connection server daemon (10.0.0.1:49424). Jul 14 21:47:49.338121 systemd-logind[1419]: Removed session 11. Jul 14 21:47:49.367741 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 49424 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:49.369187 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:49.375352 systemd-logind[1419]: New session 12 of user core. Jul 14 21:47:49.389651 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 21:47:49.508776 kubelet[2465]: I0714 21:47:49.508727 2465 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 21:47:49.511300 kubelet[2465]: I0714 21:47:49.511088 2465 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 21:47:49.552660 sshd[5739]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:49.555533 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:49424.service: Deactivated successfully. Jul 14 21:47:49.557401 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 21:47:49.559810 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Jul 14 21:47:49.560870 systemd-logind[1419]: Removed session 12. Jul 14 21:47:54.565313 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:47348.service - OpenSSH per-connection server daemon (10.0.0.1:47348). Jul 14 21:47:54.598554 sshd[5762]: Accepted publickey for core from 10.0.0.1 port 47348 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:54.599956 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:54.604716 systemd-logind[1419]: New session 13 of user core. Jul 14 21:47:54.613634 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 21:47:54.753646 sshd[5762]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:54.763221 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:47348.service: Deactivated successfully. Jul 14 21:47:54.764968 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 21:47:54.766408 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. 
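The kubelet csi_plugin.go lines above ("Trying to validate a new CSI Driver with name: csi.tigera.io ... Register new plugin") are the tail end of the kubelet plugin-registration handshake: the node-driver-registrar container started earlier exposes a Registration gRPC service on a socket kubelet watches, kubelet calls GetInfo, validates the driver, and reports the outcome via NotifyRegistrationStatus. A sketch of the registrar side using the k8s.io/kubelet pluginregistration API; serving this over gRPC on the watched socket is left out:

package main

import (
    "context"

    pluginregistration "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{ name, endpoint string }

// GetInfo answers kubelet's probe with the driver identity seen in the log.
func (r *registrar) GetInfo(ctx context.Context, _ *pluginregistration.InfoRequest) (*pluginregistration.PluginInfo, error) {
    return &pluginregistration.PluginInfo{
        Type:              pluginregistration.CSIPlugin,
        Name:              r.name,     // "csi.tigera.io"
        Endpoint:          r.endpoint, // "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
        SupportedVersions: []string{"1.0.0"},
    }, nil
}

// NotifyRegistrationStatus receives kubelet's success/failure verdict after
// it validates and registers the plugin.
func (r *registrar) NotifyRegistrationStatus(ctx context.Context, s *pluginregistration.RegistrationStatus) (*pluginregistration.RegistrationStatusResponse, error) {
    return &pluginregistration.RegistrationStatusResponse{}, nil
}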
Jul 14 21:47:54.777065 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:47352.service - OpenSSH per-connection server daemon (10.0.0.1:47352). Jul 14 21:47:54.778040 systemd-logind[1419]: Removed session 13. Jul 14 21:47:54.810093 sshd[5777]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:54.811507 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:54.815462 systemd-logind[1419]: New session 14 of user core. Jul 14 21:47:54.822658 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 21:47:55.078513 sshd[5777]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:55.091393 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:47352.service: Deactivated successfully. Jul 14 21:47:55.093067 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 21:47:55.095200 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. Jul 14 21:47:55.104743 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:47366.service - OpenSSH per-connection server daemon (10.0.0.1:47366). Jul 14 21:47:55.105791 systemd-logind[1419]: Removed session 14. Jul 14 21:47:55.145929 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 47366 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:55.147297 sshd[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:55.151522 systemd-logind[1419]: New session 15 of user core. Jul 14 21:47:55.157673 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 21:47:55.846918 sshd[5790]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:55.858505 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:47366.service: Deactivated successfully. Jul 14 21:47:55.860796 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 21:47:55.863931 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Jul 14 21:47:55.870051 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378). Jul 14 21:47:55.873255 systemd-logind[1419]: Removed session 15. Jul 14 21:47:55.908832 sshd[5810]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:55.910483 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:55.914360 systemd-logind[1419]: New session 16 of user core. Jul 14 21:47:55.921661 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 21:47:56.336333 sshd[5810]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:56.348232 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:47378.service: Deactivated successfully. Jul 14 21:47:56.350136 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 21:47:56.352815 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. Jul 14 21:47:56.362096 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:47394.service - OpenSSH per-connection server daemon (10.0.0.1:47394). Jul 14 21:47:56.363499 systemd-logind[1419]: Removed session 16. Jul 14 21:47:56.398595 sshd[5822]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:47:56.400150 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:47:56.404226 systemd-logind[1419]: New session 17 of user core. 
Jul 14 21:47:56.416639 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 21:47:56.563284 sshd[5822]: pam_unix(sshd:session): session closed for user core Jul 14 21:47:56.567834 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:47394.service: Deactivated successfully. Jul 14 21:47:56.569972 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 21:47:56.570903 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Jul 14 21:47:56.571889 systemd-logind[1419]: Removed session 17. Jul 14 21:47:57.405643 containerd[1439]: time="2025-07-14T21:47:57.405603195Z" level=info msg="StopPodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\"" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.504 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bcc6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a", Pod:"csi-node-driver-bcc6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali519b3e148c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.504 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.504 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" iface="eth0" netns="" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.504 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.504 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.525 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.525 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.525 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.534 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.534 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.536 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:57.540170 containerd[1439]: 2025-07-14 21:47:57.538 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.540170 containerd[1439]: time="2025-07-14T21:47:57.540106665Z" level=info msg="TearDown network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" successfully" Jul 14 21:47:57.540170 containerd[1439]: time="2025-07-14T21:47:57.540142265Z" level=info msg="StopPodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" returns successfully" Jul 14 21:47:57.544635 containerd[1439]: time="2025-07-14T21:47:57.544577310Z" level=info msg="RemovePodSandbox for \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\"" Jul 14 21:47:57.552715 containerd[1439]: time="2025-07-14T21:47:57.552658519Z" level=info msg="Forcibly stopping sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\"" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.594 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bcc6m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2c011bf7-d865-42c4-a2c0-d53c4ee5f22f", ResourceVersion:"1212", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4fca8d6ccc2cfd36d018737dcaec8c58c2f8090b70b8a1d22cbdf517f1c2eb6a", Pod:"csi-node-driver-bcc6m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali519b3e148c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.594 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.594 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" iface="eth0" netns="" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.594 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.594 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.613 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.613 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.613 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.622 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.622 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" HandleID="k8s-pod-network.4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Workload="localhost-k8s-csi--node--driver--bcc6m-eth0" Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.624 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:57.628263 containerd[1439]: 2025-07-14 21:47:57.626 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468" Jul 14 21:47:57.628714 containerd[1439]: time="2025-07-14T21:47:57.628305524Z" level=info msg="TearDown network for sandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" successfully" Jul 14 21:47:57.642289 containerd[1439]: time="2025-07-14T21:47:57.642228139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:57.642451 containerd[1439]: time="2025-07-14T21:47:57.642326659Z" level=info msg="RemovePodSandbox \"4367033d067bce069792be70fd2a37c98026bb17d3ccaa7f54ee717b52fa9468\" returns successfully" Jul 14 21:47:57.642942 containerd[1439]: time="2025-07-14T21:47:57.642912500Z" level=info msg="StopPodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\"" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.692 [WARNING][5901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e", Pod:"calico-apiserver-64779d767d-x5ktd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3521174f397", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.692 [INFO][5901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.692 [INFO][5901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" iface="eth0" netns="" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.692 [INFO][5901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.692 [INFO][5901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.712 [INFO][5910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.713 [INFO][5910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.713 [INFO][5910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.724 [WARNING][5910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.724 [INFO][5910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.726 [INFO][5910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:57.732536 containerd[1439]: 2025-07-14 21:47:57.730 [INFO][5901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.732536 containerd[1439]: time="2025-07-14T21:47:57.732239040Z" level=info msg="TearDown network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" successfully" Jul 14 21:47:57.732536 containerd[1439]: time="2025-07-14T21:47:57.732263200Z" level=info msg="StopPodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" returns successfully" Jul 14 21:47:57.734361 containerd[1439]: time="2025-07-14T21:47:57.732738400Z" level=info msg="RemovePodSandbox for \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\"" Jul 14 21:47:57.734361 containerd[1439]: time="2025-07-14T21:47:57.732777240Z" level=info msg="Forcibly stopping sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\"" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.771 [WARNING][5928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"51cd5ebd-5963-4ce2-ab69-bebc4a3c6e81", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"960ea8da0cd7c4a1ee9fe66226b04e48d26fb6e2ea1c17de147791db7307113e", Pod:"calico-apiserver-64779d767d-x5ktd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3521174f397", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.771 [INFO][5928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.771 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" iface="eth0" netns="" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.771 [INFO][5928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.771 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.791 [INFO][5937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.791 [INFO][5937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.791 [INFO][5937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.801 [WARNING][5937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.801 [INFO][5937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" HandleID="k8s-pod-network.7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Workload="localhost-k8s-calico--apiserver--64779d767d--x5ktd-eth0" Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.802 [INFO][5937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:57.806387 containerd[1439]: 2025-07-14 21:47:57.804 [INFO][5928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15" Jul 14 21:47:57.806810 containerd[1439]: time="2025-07-14T21:47:57.806422283Z" level=info msg="TearDown network for sandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" successfully" Jul 14 21:47:57.809252 containerd[1439]: time="2025-07-14T21:47:57.809209006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:57.809313 containerd[1439]: time="2025-07-14T21:47:57.809280966Z" level=info msg="RemovePodSandbox \"7415482a9404c7ac386aa46208546e01be595f807d3f1543d2534562a2cd4d15\" returns successfully" Jul 14 21:47:57.809793 containerd[1439]: time="2025-07-14T21:47:57.809767046Z" level=info msg="StopPodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\"" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.847 [WARNING][5955] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0", GenerateName:"calico-kube-controllers-59964484c9-", Namespace:"calico-system", SelfLink:"", UID:"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59964484c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0", Pod:"calico-kube-controllers-59964484c9-cfz2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb832edf1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.847 [INFO][5955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.847 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" iface="eth0" netns="" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.847 [INFO][5955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.847 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.868 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.868 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.868 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.877 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.877 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.878 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:57.882184 containerd[1439]: 2025-07-14 21:47:57.880 [INFO][5955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.882723 containerd[1439]: time="2025-07-14T21:47:57.882250607Z" level=info msg="TearDown network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" successfully" Jul 14 21:47:57.882723 containerd[1439]: time="2025-07-14T21:47:57.882278327Z" level=info msg="StopPodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" returns successfully" Jul 14 21:47:57.882900 containerd[1439]: time="2025-07-14T21:47:57.882862328Z" level=info msg="RemovePodSandbox for \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\"" Jul 14 21:47:57.882900 containerd[1439]: time="2025-07-14T21:47:57.882894408Z" level=info msg="Forcibly stopping sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\"" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.919 [WARNING][5982] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0", GenerateName:"calico-kube-controllers-59964484c9-", Namespace:"calico-system", SelfLink:"", UID:"caacf5f1-e0ed-4877-bf6f-031cb7eea2e7", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59964484c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e376ba4d29cec1272673576ffa123115007cb0c3477c04ba6e6069f0019533c0", Pod:"calico-kube-controllers-59964484c9-cfz2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califfb832edf1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.920 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.920 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" iface="eth0" netns="" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.920 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.920 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.938 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.939 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.939 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.949 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.949 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" HandleID="k8s-pod-network.3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Workload="localhost-k8s-calico--kube--controllers--59964484c9--cfz2t-eth0" Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.951 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:57.955323 containerd[1439]: 2025-07-14 21:47:57.953 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d" Jul 14 21:47:57.955787 containerd[1439]: time="2025-07-14T21:47:57.955378209Z" level=info msg="TearDown network for sandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" successfully" Jul 14 21:47:57.958304 containerd[1439]: time="2025-07-14T21:47:57.958264292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:57.958407 containerd[1439]: time="2025-07-14T21:47:57.958358652Z" level=info msg="RemovePodSandbox \"3f0eeabaacf38b68abc4a5404f52d3f063a2c8d7fb038c7bcd6003b243196e2d\" returns successfully" Jul 14 21:47:57.959167 containerd[1439]: time="2025-07-14T21:47:57.958868133Z" level=info msg="StopPodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\"" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:57.994 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"63909ecb-5ac2-4278-909f-4d78ae798ccd", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c", Pod:"calico-apiserver-64779d767d-8dt8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8db3f3b1613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:57.994 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:57.994 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" iface="eth0" netns="" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:57.994 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:57.994 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.013 [INFO][6018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.013 [INFO][6018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.013 [INFO][6018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.022 [WARNING][6018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.022 [INFO][6018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.024 [INFO][6018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.027722 containerd[1439]: 2025-07-14 21:47:58.025 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.028607 containerd[1439]: time="2025-07-14T21:47:58.027705729Z" level=info msg="TearDown network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" successfully" Jul 14 21:47:58.028607 containerd[1439]: time="2025-07-14T21:47:58.028504090Z" level=info msg="StopPodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" returns successfully" Jul 14 21:47:58.029178 containerd[1439]: time="2025-07-14T21:47:58.029157891Z" level=info msg="RemovePodSandbox for \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\"" Jul 14 21:47:58.029231 containerd[1439]: time="2025-07-14T21:47:58.029187091Z" level=info msg="Forcibly stopping sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\"" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.066 [WARNING][6036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0", GenerateName:"calico-apiserver-64779d767d-", Namespace:"calico-apiserver", SelfLink:"", UID:"63909ecb-5ac2-4278-909f-4d78ae798ccd", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64779d767d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"08223b2060266e6613ea47503f4f7668468b42f8349b44fca5220c8cd9bbed9c", Pod:"calico-apiserver-64779d767d-8dt8r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8db3f3b1613", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.066 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.066 [INFO][6036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" iface="eth0" netns="" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.066 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.066 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.085 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.085 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.085 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.094 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.094 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" HandleID="k8s-pod-network.f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Workload="localhost-k8s-calico--apiserver--64779d767d--8dt8r-eth0" Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.099 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.103238 containerd[1439]: 2025-07-14 21:47:58.101 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64" Jul 14 21:47:58.103792 containerd[1439]: time="2025-07-14T21:47:58.103270573Z" level=info msg="TearDown network for sandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" successfully" Jul 14 21:47:58.106577 containerd[1439]: time="2025-07-14T21:47:58.106534936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:58.106666 containerd[1439]: time="2025-07-14T21:47:58.106617297Z" level=info msg="RemovePodSandbox \"f4e63f75a91c36f0a759c471854314bf0e41e7d3316b92f007e33974ca2e7d64\" returns successfully" Jul 14 21:47:58.107089 containerd[1439]: time="2025-07-14T21:47:58.107066937Z" level=info msg="StopPodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\"" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.144 [WARNING][6063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--99stm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeaa372-8400-4d54-bcde-fb86f1edd957", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec", Pod:"coredns-674b8bbfcf-99stm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5ba2bfa318", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.144 [INFO][6063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.144 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" iface="eth0" netns="" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.144 [INFO][6063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.144 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.163 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.163 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.163 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.172 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.172 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.174 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.177536 containerd[1439]: 2025-07-14 21:47:58.175 [INFO][6063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.177963 containerd[1439]: time="2025-07-14T21:47:58.177568855Z" level=info msg="TearDown network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" successfully" Jul 14 21:47:58.177963 containerd[1439]: time="2025-07-14T21:47:58.177594975Z" level=info msg="StopPodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" returns successfully" Jul 14 21:47:58.178207 containerd[1439]: time="2025-07-14T21:47:58.178163536Z" level=info msg="RemovePodSandbox for \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\"" Jul 14 21:47:58.178243 containerd[1439]: time="2025-07-14T21:47:58.178205696Z" level=info msg="Forcibly stopping sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\"" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.211 [WARNING][6090] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--99stm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7aeaa372-8400-4d54-bcde-fb86f1edd957", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6f1f1e2a71213f69ed19cd6d022ac457ba70c702e9c37a483270b31010807ec", Pod:"coredns-674b8bbfcf-99stm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5ba2bfa318", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.211 [INFO][6090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.211 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" iface="eth0" netns="" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.211 [INFO][6090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.211 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.228 [INFO][6099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.228 [INFO][6099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.228 [INFO][6099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.237 [WARNING][6099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.237 [INFO][6099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" HandleID="k8s-pod-network.c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Workload="localhost-k8s-coredns--674b8bbfcf--99stm-eth0" Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.239 [INFO][6099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.242560 containerd[1439]: 2025-07-14 21:47:58.240 [INFO][6090] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf" Jul 14 21:47:58.242981 containerd[1439]: time="2025-07-14T21:47:58.242596047Z" level=info msg="TearDown network for sandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" successfully" Jul 14 21:47:58.253646 containerd[1439]: time="2025-07-14T21:47:58.253591019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:58.253749 containerd[1439]: time="2025-07-14T21:47:58.253679979Z" level=info msg="RemovePodSandbox \"c2a8190b289e55ae3b816233af2e7dcf1c978e48871523066ff9a6bb7be21aaf\" returns successfully" Jul 14 21:47:58.254393 containerd[1439]: time="2025-07-14T21:47:58.254241460Z" level=info msg="StopPodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\"" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.291 [WARNING][6118] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b023aed-807c-42ef-982d-e9e0dbb828c3", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65", Pod:"goldmane-768f4c5c69-tbxpx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42692a0795a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.291 [INFO][6118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.291 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" iface="eth0" netns="" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.291 [INFO][6118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.291 [INFO][6118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.310 [INFO][6126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.310 [INFO][6126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.310 [INFO][6126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.320 [WARNING][6126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.321 [INFO][6126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.323 [INFO][6126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.326948 containerd[1439]: 2025-07-14 21:47:58.325 [INFO][6118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.326948 containerd[1439]: time="2025-07-14T21:47:58.326912220Z" level=info msg="TearDown network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" successfully" Jul 14 21:47:58.326948 containerd[1439]: time="2025-07-14T21:47:58.326938340Z" level=info msg="StopPodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" returns successfully" Jul 14 21:47:58.328294 containerd[1439]: time="2025-07-14T21:47:58.328197382Z" level=info msg="RemovePodSandbox for \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\"" Jul 14 21:47:58.328294 containerd[1439]: time="2025-07-14T21:47:58.328234822Z" level=info msg="Forcibly stopping sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\"" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.364 [WARNING][6145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7b023aed-807c-42ef-982d-e9e0dbb828c3", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb136d6a0c419ab06e936fd01756e18b98f2880407bcca829ec7cefe3536f65", Pod:"goldmane-768f4c5c69-tbxpx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali42692a0795a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.365 [INFO][6145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.365 [INFO][6145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" iface="eth0" netns="" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.365 [INFO][6145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.365 [INFO][6145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.384 [INFO][6153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.384 [INFO][6153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.384 [INFO][6153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.394 [WARNING][6153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.394 [INFO][6153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" HandleID="k8s-pod-network.4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Workload="localhost-k8s-goldmane--768f4c5c69--tbxpx-eth0" Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.395 [INFO][6153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.399296 containerd[1439]: 2025-07-14 21:47:58.397 [INFO][6145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d" Jul 14 21:47:58.399763 containerd[1439]: time="2025-07-14T21:47:58.399324180Z" level=info msg="TearDown network for sandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" successfully" Jul 14 21:47:58.402270 containerd[1439]: time="2025-07-14T21:47:58.402237183Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:58.402348 containerd[1439]: time="2025-07-14T21:47:58.402300343Z" level=info msg="RemovePodSandbox \"4a7b31fdf616259695360ad77a7bb2625a5b8071f624de6417400b16ab3ab17d\" returns successfully" Jul 14 21:47:58.402841 containerd[1439]: time="2025-07-14T21:47:58.402813544Z" level=info msg="StopPodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\"" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.435 [WARNING][6170] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" WorkloadEndpoint="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.435 [INFO][6170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.435 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" iface="eth0" netns="" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.435 [INFO][6170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.435 [INFO][6170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.459 [INFO][6179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.459 [INFO][6179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.459 [INFO][6179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.468 [WARNING][6179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.468 [INFO][6179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.470 [INFO][6179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.473856 containerd[1439]: 2025-07-14 21:47:58.471 [INFO][6170] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.473856 containerd[1439]: time="2025-07-14T21:47:58.473724742Z" level=info msg="TearDown network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" successfully" Jul 14 21:47:58.473856 containerd[1439]: time="2025-07-14T21:47:58.473749662Z" level=info msg="StopPodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" returns successfully" Jul 14 21:47:58.474539 containerd[1439]: time="2025-07-14T21:47:58.474259583Z" level=info msg="RemovePodSandbox for \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\"" Jul 14 21:47:58.474539 containerd[1439]: time="2025-07-14T21:47:58.474294703Z" level=info msg="Forcibly stopping sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\"" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.511 [WARNING][6197] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" WorkloadEndpoint="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.511 [INFO][6197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.511 [INFO][6197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" iface="eth0" netns="" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.511 [INFO][6197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.511 [INFO][6197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.530 [INFO][6206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.530 [INFO][6206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.530 [INFO][6206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.540 [WARNING][6206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.540 [INFO][6206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" HandleID="k8s-pod-network.288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Workload="localhost-k8s-whisker--74b74ffbd9--p9kqm-eth0" Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.542 [INFO][6206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:47:58.545858 containerd[1439]: 2025-07-14 21:47:58.544 [INFO][6197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820" Jul 14 21:47:58.546263 containerd[1439]: time="2025-07-14T21:47:58.545960222Z" level=info msg="TearDown network for sandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" successfully" Jul 14 21:47:58.549494 containerd[1439]: time="2025-07-14T21:47:58.549423706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:47:58.549614 containerd[1439]: time="2025-07-14T21:47:58.549519226Z" level=info msg="RemovePodSandbox \"288af3c62c0874dbc493a3844e310b6e11794e7cc5872f184bfe72c238e88820\" returns successfully" Jul 14 21:47:58.550893 containerd[1439]: time="2025-07-14T21:47:58.550688267Z" level=info msg="StopPodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\"" Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.588 [WARNING][6224] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n28f9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a6a9630-19de-4934-990f-1a0a5b55fdcd", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0", Pod:"coredns-674b8bbfcf-n28f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dbe579a49d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.588 [INFO][6224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.588 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" iface="eth0" netns="" Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.588 [INFO][6224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.588 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.607 [INFO][6234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0" Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.607 [INFO][6234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.607 [INFO][6234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.616 [WARNING][6234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0"
Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.616 [INFO][6234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0"
Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.618 [INFO][6234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:47:58.621795 containerd[1439]: 2025-07-14 21:47:58.620 [INFO][6224] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"
Jul 14 21:47:58.622625 containerd[1439]: time="2025-07-14T21:47:58.621835786Z" level=info msg="TearDown network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" successfully"
Jul 14 21:47:58.622625 containerd[1439]: time="2025-07-14T21:47:58.621861426Z" level=info msg="StopPodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" returns successfully"
Jul 14 21:47:58.622625 containerd[1439]: time="2025-07-14T21:47:58.622379307Z" level=info msg="RemovePodSandbox for \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\""
Jul 14 21:47:58.622625 containerd[1439]: time="2025-07-14T21:47:58.622406427Z" level=info msg="Forcibly stopping sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\""
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.658 [WARNING][6252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--n28f9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8a6a9630-19de-4934-990f-1a0a5b55fdcd", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 47, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ea7c5308a21dfdc9639e35c138723b126b20bc87ba63f79f9500486076a94e0", Pod:"coredns-674b8bbfcf-n28f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0dbe579a49d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.658 [INFO][6252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.659 [INFO][6252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" iface="eth0" netns=""
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.659 [INFO][6252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.659 [INFO][6252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.679 [INFO][6261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0"
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.679 [INFO][6261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.679 [INFO][6261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.688 [WARNING][6261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0"
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.688 [INFO][6261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" HandleID="k8s-pod-network.8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844" Workload="localhost-k8s-coredns--674b8bbfcf--n28f9-eth0"
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.689 [INFO][6261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:47:58.692849 containerd[1439]: 2025-07-14 21:47:58.691 [INFO][6252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"
Jul 14 21:47:58.694134 containerd[1439]: time="2025-07-14T21:47:58.693057745Z" level=info msg="TearDown network for sandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" successfully"
Jul 14 21:47:58.696128 containerd[1439]: time="2025-07-14T21:47:58.696092548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 21:47:58.696264 containerd[1439]: time="2025-07-14T21:47:58.696157908Z" level=info msg="RemovePodSandbox \"8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844\" returns successfully"
Jul 14 21:48:01.582635 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:47404.service - OpenSSH per-connection server daemon (10.0.0.1:47404).
Jul 14 21:48:01.625796 sshd[6271]: Accepted publickey for core from 10.0.0.1 port 47404 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:48:01.627176 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:48:01.631364 systemd-logind[1419]: New session 18 of user core.
Jul 14 21:48:01.640590 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 21:48:01.791790 sshd[6271]: pam_unix(sshd:session): session closed for user core
Jul 14 21:48:01.795539 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:47404.service: Deactivated successfully.
Jul 14 21:48:01.798799 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 21:48:01.800014 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit.
Jul 14 21:48:01.800943 systemd-logind[1419]: Removed session 18.
Jul 14 21:48:06.803155 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:48414.service - OpenSSH per-connection server daemon (10.0.0.1:48414).
Jul 14 21:48:06.838908 sshd[6295]: Accepted publickey for core from 10.0.0.1 port 48414 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:48:06.840549 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:48:06.845706 systemd-logind[1419]: New session 19 of user core.
Jul 14 21:48:06.851888 systemd[1]: Started session-19.scope - Session 19 of User core.
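[Annotation] Note the removal pattern in the containerd lines above: RemovePodSandbox still "returns successfully" even though the status lookup fails with "not found", because an already-absent sandbox satisfies the goal of removal, and the container event is simply sent with a nil status. A hedged Go sketch of that idempotent-delete shape, with illustrative names rather than containerd's internals:

// sketch.go - hedged sketch of idempotent sandbox removal; not containerd code.
package main

import (
	"errors"
	"log"
)

var errSandboxNotFound = errors.New("an error occurred when try to find sandbox: not found")

// Stand-in for the status lookup; here the sandbox is already gone.
func getPodSandboxStatus(id string) (any, error) { return nil, errSandboxNotFound }

func sendContainerEvent(id string, status any) {}

func removePodSandbox(id string) error {
	status, err := getPodSandboxStatus(id)
	if err != nil {
		// Matches the warning above: emit the event anyway, with nil status.
		log.Printf("warning: failed to get podSandbox status for %q: %v; sending event with nil podSandboxStatus", id, err)
		status = nil
	}
	sendContainerEvent(id, status)
	// Nothing left to tear down: removing an absent sandbox is a success.
	return nil
}

func main() {
	if err := removePodSandbox("8a89b78715a2fe175a77277ee80dc704f05b2463465d65acc06be0f5bd5de844"); err == nil {
		log.Printf("RemovePodSandbox returns successfully")
	}
}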
Jul 14 21:48:06.923816 kubelet[2465]: I0714 21:48:06.923742 2465 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bcc6m" podStartSLOduration=39.802738718 podStartE2EDuration="50.923728209s" podCreationTimestamp="2025-07-14 21:47:16 +0000 UTC" firstStartedPulling="2025-07-14 21:47:37.746487851 +0000 UTC m=+40.432171972" lastFinishedPulling="2025-07-14 21:47:48.867477302 +0000 UTC m=+51.553161463" observedRunningTime="2025-07-14 21:47:49.700859896 +0000 UTC m=+52.386544057" watchObservedRunningTime="2025-07-14 21:48:06.923728209 +0000 UTC m=+69.609412370"
Jul 14 21:48:07.038886 sshd[6295]: pam_unix(sshd:session): session closed for user core
Jul 14 21:48:07.042171 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:48414.service: Deactivated successfully.
Jul 14 21:48:07.045514 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 21:48:07.046510 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit.
Jul 14 21:48:07.047321 systemd-logind[1419]: Removed session 19.
Jul 14 21:48:07.417330 kubelet[2465]: E0714 21:48:07.417291 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:48:09.416953 kubelet[2465]: E0714 21:48:09.416850 2465 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:48:12.055429 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:48418.service - OpenSSH per-connection server daemon (10.0.0.1:48418).
Jul 14 21:48:12.096665 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 48418 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:48:12.097887 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:48:12.101528 systemd-logind[1419]: New session 20 of user core.
Jul 14 21:48:12.110661 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 21:48:12.285354 sshd[6332]: pam_unix(sshd:session): session closed for user core
Jul 14 21:48:12.288558 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:48418.service: Deactivated successfully.
Jul 14 21:48:12.294424 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 21:48:12.296312 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit.
Jul 14 21:48:12.298653 systemd-logind[1419]: Removed session 20.
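[Annotation] The numbers in the pod_startup_latency_tracker line above are mutually consistent if podStartSLOduration is the end-to-end startup time minus the image-pull window, both taken on the monotonic clock (the m=+... offsets): 50.923728209 - (51.553161463 - 40.432171972) = 39.802738718. A worked check of that arithmetic in Go; this is a reading of the logged values, not kubelet code:

// sketch.go - worked arithmetic check of the startup-latency line above.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling = 40.432171972 // m=+ offset, seconds
		lastFinishedPulling = 51.553161463 // m=+ offset, seconds
		podStartE2E         = 50.923728209 // watchObservedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling // 11.120989491s spent pulling images
	slo := podStartE2E - pull
	fmt.Printf("podStartSLOduration = %.9f s\n", slo) // prints 39.802738718, matching the log
}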