Jul 11 00:16:44.954436 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:16:44.954458 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025
Jul 11 00:16:44.954469 kernel: KASLR enabled
Jul 11 00:16:44.954475 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:16:44.954481 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 11 00:16:44.954486 kernel: random: crng init done
Jul 11 00:16:44.954493 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:16:44.954499 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 11 00:16:44.954505 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:16:44.954513 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954519 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954525 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954531 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954537 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954545 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954560 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954567 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954573 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:16:44.954580 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:16:44.954586 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:16:44.954593 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:16:44.954599 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 11 00:16:44.954605 kernel: Zone ranges:
Jul 11 00:16:44.954611 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:16:44.954617 kernel: DMA32 empty
Jul 11 00:16:44.954625 kernel: Normal empty
Jul 11 00:16:44.954631 kernel: Movable zone start for each node
Jul 11 00:16:44.954637 kernel: Early memory node ranges
Jul 11 00:16:44.954644 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 11 00:16:44.954650 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 11 00:16:44.954656 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 11 00:16:44.954662 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 00:16:44.954669 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 00:16:44.954675 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 00:16:44.954681 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 00:16:44.954688 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:16:44.954695 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:16:44.954702 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:16:44.954708 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:16:44.954715 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:16:44.954725 kernel: psci: Trusted OS migration not required
Jul 11 00:16:44.954732 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:16:44.954739 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:16:44.954747 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 11 00:16:44.954754 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 11 00:16:44.954761 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:16:44.954767 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:16:44.954774 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:16:44.954781 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:16:44.954788 kernel: CPU features: detected: Spectre-v4
Jul 11 00:16:44.954794 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:16:44.954801 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:16:44.954808 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:16:44.954816 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:16:44.954823 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:16:44.954829 kernel: alternatives: applying boot alternatives
Jul 11 00:16:44.954837 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:16:44.954844 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:16:44.954851 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:16:44.954857 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:16:44.954864 kernel: Fallback order for Node 0: 0
Jul 11 00:16:44.954871 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:16:44.954877 kernel: Policy zone: DMA
Jul 11 00:16:44.954885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:16:44.954893 kernel: software IO TLB: area num 4.
Jul 11 00:16:44.954900 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 11 00:16:44.954907 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 11 00:16:44.954913 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:16:44.954920 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:16:44.954927 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:16:44.954934 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:16:44.954941 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:16:44.954948 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:16:44.954954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 00:16:44.954961 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:16:44.954968 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:16:44.954976 kernel: GICv3: 256 SPIs implemented
Jul 11 00:16:44.954982 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:16:44.954989 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:16:44.954995 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 00:16:44.955002 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:16:44.955009 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:16:44.955016 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:16:44.955023 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:16:44.955030 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 11 00:16:44.955037 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 11 00:16:44.955044 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:16:44.955052 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:16:44.955059 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:16:44.955066 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:16:44.955073 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:16:44.955080 kernel: arm-pv: using stolen time PV
Jul 11 00:16:44.955087 kernel: Console: colour dummy device 80x25
Jul 11 00:16:44.955094 kernel: ACPI: Core revision 20230628
Jul 11 00:16:44.955102 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:16:44.955109 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:16:44.955116 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:16:44.955124 kernel: landlock: Up and running.
Jul 11 00:16:44.955131 kernel: SELinux: Initializing.
Jul 11 00:16:44.955142 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:16:44.955150 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:16:44.955157 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:44.955164 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:16:44.955170 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:16:44.955177 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:16:44.955184 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:16:44.955193 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:16:44.955200 kernel: Remapping and enabling EFI services.
Jul 11 00:16:44.955206 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:16:44.955213 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:16:44.955220 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:16:44.955227 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 11 00:16:44.955234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:16:44.955241 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:16:44.955247 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:16:44.955254 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:16:44.955263 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 11 00:16:44.955270 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:16:44.955282 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:16:44.955291 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:16:44.955298 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:16:44.955306 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 11 00:16:44.955313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:16:44.955320 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:16:44.955330 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:16:44.955339 kernel: SMP: Total of 4 processors activated.
Jul 11 00:16:44.955346 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:16:44.955357 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:16:44.955366 kernel: CPU features: detected: Common not Private translations
Jul 11 00:16:44.955373 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:16:44.955381 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 00:16:44.955388 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:16:44.955395 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:16:44.955418 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:16:44.955427 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:16:44.955435 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:16:44.955442 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:16:44.955449 kernel: alternatives: applying system-wide alternatives
Jul 11 00:16:44.955461 kernel: devtmpfs: initialized
Jul 11 00:16:44.955469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:16:44.955477 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:16:44.955484 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:16:44.955495 kernel: SMBIOS 3.0.0 present.
Jul 11 00:16:44.955502 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 11 00:16:44.955513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:16:44.955527 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:16:44.955534 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:16:44.955542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:16:44.955554 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:16:44.955561 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 11 00:16:44.955569 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:16:44.955578 kernel: cpuidle: using governor menu
Jul 11 00:16:44.955586 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:16:44.955593 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:16:44.955600 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:16:44.955607 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:16:44.955614 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 00:16:44.955621 kernel: Modules: 0 pages in range for non-PLT usage
Jul 11 00:16:44.955628 kernel: Modules: 509008 pages in range for PLT usage
Jul 11 00:16:44.955636 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:16:44.955644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:16:44.955651 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:16:44.955659 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 00:16:44.955666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:16:44.955673 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:16:44.955681 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:16:44.955688 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 00:16:44.955695 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:16:44.955702 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:16:44.955711 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:16:44.955718 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:16:44.955726 kernel: ACPI: Interpreter enabled
Jul 11 00:16:44.955733 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:16:44.955740 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:16:44.955748 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:16:44.955755 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:16:44.955762 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:16:44.955916 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:16:44.955995 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:16:44.956061 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:16:44.956124 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:16:44.956186 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:16:44.956196 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:16:44.956203 kernel: PCI host bridge to bus 0000:00
Jul 11 00:16:44.956274 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:16:44.956338 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:16:44.956396 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:16:44.956484 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:16:44.956575 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:16:44.956654 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:16:44.956724 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:16:44.956797 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:16:44.956882 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:16:44.956980 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:16:44.957052 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:16:44.957132 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:16:44.957195 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:16:44.957257 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:16:44.957324 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:16:44.957334 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:16:44.957342 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:16:44.957349 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:16:44.957357 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:16:44.957365 kernel: iommu: Default domain type: Translated
Jul 11 00:16:44.957372 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:16:44.957380 kernel: efivars: Registered efivars operations
Jul 11 00:16:44.957387 kernel: vgaarb: loaded
Jul 11 00:16:44.957397 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:16:44.957415 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:16:44.957423 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:16:44.957431 kernel: pnp: PnP ACPI init
Jul 11 00:16:44.957511 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:16:44.957522 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:16:44.957529 kernel: NET: Registered PF_INET protocol family
Jul 11 00:16:44.957537 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:16:44.957553 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:16:44.957561 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:16:44.957569 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:16:44.957577 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:16:44.957585 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:16:44.957593 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:16:44.957600 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:16:44.957608 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:16:44.957616 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:16:44.957625 kernel: kvm [1]: HYP mode not available
Jul 11 00:16:44.957633 kernel: Initialise system trusted keyrings
Jul 11 00:16:44.957641 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:16:44.957648 kernel: Key type asymmetric registered
Jul 11 00:16:44.957656 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:16:44.957663 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:16:44.957671 kernel: io scheduler mq-deadline registered
Jul 11 00:16:44.957678 kernel: io scheduler kyber registered
Jul 11 00:16:44.957686 kernel: io scheduler bfq registered
Jul 11 00:16:44.957695 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:16:44.957702 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:16:44.957711 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:16:44.957786 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:16:44.957796 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:16:44.957804 kernel: thunder_xcv, ver 1.0
Jul 11 00:16:44.957811 kernel: thunder_bgx, ver 1.0
Jul 11 00:16:44.957819 kernel: nicpf, ver 1.0
Jul 11 00:16:44.957826 kernel: nicvf, ver 1.0
Jul 11 00:16:44.957914 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:16:44.957981 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:16:44 UTC (1752193004)
Jul 11 00:16:44.957992 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:16:44.957999 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:16:44.958007 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 11 00:16:44.958015 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 00:16:44.958022 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:16:44.958030 kernel: Segment Routing with IPv6
Jul 11 00:16:44.958040 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:16:44.958047 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:16:44.958055 kernel: Key type dns_resolver registered
Jul 11 00:16:44.958062 kernel: registered taskstats version 1
Jul 11 00:16:44.958070 kernel: Loading compiled-in X.509 certificates
Jul 11 00:16:44.958077 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb'
Jul 11 00:16:44.958085 kernel: Key type .fscrypt registered
Jul 11 00:16:44.958092 kernel: Key type fscrypt-provisioning registered
Jul 11 00:16:44.958100 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:16:44.958109 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:16:44.958116 kernel: ima: No architecture policies found
Jul 11 00:16:44.958124 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:16:44.958131 kernel: clk: Disabling unused clocks
Jul 11 00:16:44.958138 kernel: Freeing unused kernel memory: 39424K
Jul 11 00:16:44.958146 kernel: Run /init as init process
Jul 11 00:16:44.958153 kernel: with arguments:
Jul 11 00:16:44.958161 kernel: /init
Jul 11 00:16:44.958168 kernel: with environment:
Jul 11 00:16:44.958177 kernel: HOME=/
Jul 11 00:16:44.958185 kernel: TERM=linux
Jul 11 00:16:44.958192 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:16:44.958201 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:16:44.958211 systemd[1]: Detected virtualization kvm.
Jul 11 00:16:44.958219 systemd[1]: Detected architecture arm64.
Jul 11 00:16:44.958227 systemd[1]: Running in initrd.
Jul 11 00:16:44.958237 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:16:44.958245 systemd[1]: Hostname set to <localhost>.
Jul 11 00:16:44.958253 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:16:44.958261 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:16:44.958269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:16:44.958278 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:16:44.958286 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:16:44.958294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:16:44.958304 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:16:44.958313 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:16:44.958322 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:16:44.958331 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:16:44.958339 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:16:44.958347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:44.958356 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:16:44.958365 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:16:44.958373 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:16:44.958381 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:16:44.958396 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:16:44.958452 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:16:44.958463 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:16:44.958471 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:16:44.958482 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:16:44.958490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:16:44.958501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:16:44.958510 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:16:44.958518 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:16:44.958527 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:16:44.958535 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:16:44.958543 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:16:44.958558 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:16:44.958567 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:16:44.958577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:44.958585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:16:44.958594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:16:44.958602 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:16:44.958611 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:16:44.958621 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:16:44.958629 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:16:44.958637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:44.958646 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:44.958677 systemd-journald[236]: Collecting audit messages is disabled.
Jul 11 00:16:44.958700 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:16:44.958709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:16:44.958716 kernel: Bridge firewalling registered
Jul 11 00:16:44.958725 systemd-journald[236]: Journal started
Jul 11 00:16:44.958744 systemd-journald[236]: Runtime Journal (/run/log/journal/b94a6f3f24264b418e84ad34482f86e1) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:16:44.925691 systemd-modules-load[238]: Inserted module 'overlay'
Jul 11 00:16:44.960289 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:16:44.959007 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 11 00:16:44.961647 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:16:44.972615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:16:44.974367 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:16:44.976650 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:44.981648 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:16:44.982823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:16:44.986709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:16:44.989031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:16:44.997615 dracut-cmdline[275]: dracut-dracut-053
Jul 11 00:16:45.000174 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:16:45.019609 systemd-resolved[278]: Positive Trust Anchors:
Jul 11 00:16:45.019629 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:16:45.019660 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:16:45.024521 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 11 00:16:45.029463 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:16:45.030306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:45.073442 kernel: SCSI subsystem initialized
Jul 11 00:16:45.078423 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:16:45.085426 kernel: iscsi: registered transport (tcp)
Jul 11 00:16:45.098679 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:16:45.098734 kernel: QLogic iSCSI HBA Driver
Jul 11 00:16:45.145343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:16:45.153582 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:16:45.173140 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:16:45.173190 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:16:45.174416 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:16:45.221430 kernel: raid6: neonx8 gen() 15656 MB/s
Jul 11 00:16:45.238420 kernel: raid6: neonx4 gen() 15602 MB/s
Jul 11 00:16:45.255420 kernel: raid6: neonx2 gen() 13209 MB/s
Jul 11 00:16:45.272417 kernel: raid6: neonx1 gen() 10479 MB/s
Jul 11 00:16:45.289416 kernel: raid6: int64x8 gen() 6947 MB/s
Jul 11 00:16:45.306416 kernel: raid6: int64x4 gen() 7346 MB/s
Jul 11 00:16:45.323416 kernel: raid6: int64x2 gen() 6131 MB/s
Jul 11 00:16:45.340424 kernel: raid6: int64x1 gen() 5040 MB/s
Jul 11 00:16:45.340444 kernel: raid6: using algorithm neonx8 gen() 15656 MB/s
Jul 11 00:16:45.357431 kernel: raid6: .... xor() 11723 MB/s, rmw enabled
Jul 11 00:16:45.357446 kernel: raid6: using neon recovery algorithm
Jul 11 00:16:45.362485 kernel: xor: measuring software checksum speed
Jul 11 00:16:45.362503 kernel: 8regs : 19726 MB/sec
Jul 11 00:16:45.363503 kernel: 32regs : 19208 MB/sec
Jul 11 00:16:45.363516 kernel: arm64_neon : 27025 MB/sec
Jul 11 00:16:45.363525 kernel: xor: using function: arm64_neon (27025 MB/sec)
Jul 11 00:16:45.436429 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:16:45.457965 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 00:16:45.469666 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:16:45.489535 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 11 00:16:45.492966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:16:45.500605 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:16:45.541015 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jul 11 00:16:45.599136 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:16:45.616619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:16:45.668786 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:16:45.675680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:16:45.694330 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:16:45.695436 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:16:45.699366 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:45.700205 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:16:45.708640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:16:45.719746 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:16:45.730774 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 00:16:45.734657 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:16:45.734811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:16:45.734890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:45.741995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:45.743118 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:16:45.750490 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:16:45.750540 kernel: GPT:9289727 != 19775487
Jul 11 00:16:45.750572 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:16:45.750583 kernel: GPT:9289727 != 19775487
Jul 11 00:16:45.750606 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:16:45.750619 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:45.743195 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:45.747986 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:45.762684 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:16:45.780438 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (507)
Jul 11 00:16:45.778950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:16:45.785457 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (511)
Jul 11 00:16:45.789413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:16:45.794317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:16:45.801178 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:16:45.802479 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:16:45.808650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 00:16:45.827597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:16:45.829256 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:16:45.835067 disk-uuid[551]: Primary Header is updated.
Jul 11 00:16:45.835067 disk-uuid[551]: Secondary Entries is updated.
Jul 11 00:16:45.835067 disk-uuid[551]: Secondary Header is updated.
Jul 11 00:16:45.838422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:45.854028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:16:45.856440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:45.858426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:46.858861 disk-uuid[552]: The operation has completed successfully.
Jul 11 00:16:46.859932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:16:46.886011 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:16:46.886129 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:16:46.904592 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:16:46.908820 sh[573]: Success
Jul 11 00:16:46.922445 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 11 00:16:46.965968 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:16:46.967537 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:16:46.969456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:16:46.980047 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7
Jul 11 00:16:46.980087 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:16:46.980105 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:16:46.980115 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:16:46.981415 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:16:46.984670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:16:46.985767 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:16:46.996555 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:16:46.997954 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:16:47.006117 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:16:47.006164 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:16:47.006175 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:16:47.009441 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:16:47.017058 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:16:47.018418 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:16:47.024282 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:16:47.032630 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:16:47.116573 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:16:47.126598 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:16:47.153530 systemd-networkd[764]: lo: Link UP
Jul 11 00:16:47.153550 systemd-networkd[764]: lo: Gained carrier
Jul 11 00:16:47.154231 systemd-networkd[764]: Enumeration completed
Jul 11 00:16:47.154353 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:16:47.154766 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:47.154769 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:16:47.155630 systemd[1]: Reached target network.target - Network.
Jul 11 00:16:47.155672 systemd-networkd[764]: eth0: Link UP
Jul 11 00:16:47.155676 systemd-networkd[764]: eth0: Gained carrier
Jul 11 00:16:47.155683 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:16:47.166277 ignition[662]: Ignition 2.19.0
Jul 11 00:16:47.166284 ignition[662]: Stage: fetch-offline
Jul 11 00:16:47.166320 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:47.166328 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:47.166495 ignition[662]: parsed url from cmdline: ""
Jul 11 00:16:47.166498 ignition[662]: no config URL provided
Jul 11 00:16:47.166503 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:16:47.166512 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:16:47.166537 ignition[662]: op(1): [started] loading QEMU firmware config module
Jul 11 00:16:47.166549 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:16:47.174269 ignition[662]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:16:47.176453 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:16:47.213710 ignition[662]: parsing config with SHA512: 8acc4c80dc2d30d995fb38d37c9e99fd330bde3e58a8ab72c6ccf582dafb16ac30b810e4a454bdf5551992ddcd3e4a65be818b3c5b30a4afa2c2bbe88d63ee6d
Jul 11 00:16:47.217679 unknown[662]: fetched base config from "system"
Jul 11 00:16:47.217689 unknown[662]: fetched user config from "qemu"
Jul 11 00:16:47.218098 ignition[662]: fetch-offline: fetch-offline passed
Jul 11 00:16:47.218158 ignition[662]: Ignition finished successfully
Jul 11 00:16:47.219905 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:16:47.221654 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:16:47.232667 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:16:47.242825 ignition[771]: Ignition 2.19.0
Jul 11 00:16:47.242835 ignition[771]: Stage: kargs
Jul 11 00:16:47.243005 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:47.243014 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:47.243947 ignition[771]: kargs: kargs passed
Jul 11 00:16:47.243994 ignition[771]: Ignition finished successfully
Jul 11 00:16:47.247334 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 00:16:47.264770 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:16:47.274253 ignition[779]: Ignition 2.19.0
Jul 11 00:16:47.274263 ignition[779]: Stage: disks
Jul 11 00:16:47.274446 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:47.274456 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:47.275310 ignition[779]: disks: disks passed
Jul 11 00:16:47.276668 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:16:47.275355 ignition[779]: Ignition finished successfully
Jul 11 00:16:47.277962 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:16:47.279227 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:16:47.280590 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:16:47.281895 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:16:47.283280 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:16:47.296578 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:16:47.307022 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:16:47.310906 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:16:47.312809 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:16:47.361217 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:16:47.362719 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none.
Jul 11 00:16:47.362515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:16:47.374495 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:16:47.376644 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:16:47.377627 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:16:47.377667 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:16:47.377690 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:16:47.383113 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:16:47.385312 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:16:47.389171 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799)
Jul 11 00:16:47.389203 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:16:47.389215 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:16:47.389224 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:16:47.391416 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:16:47.402398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:16:47.440239 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:16:47.444606 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:16:47.448627 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:16:47.451694 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:16:47.524014 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:16:47.539581 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:16:47.540982 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:16:47.545445 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:16:47.563139 ignition[912]: INFO : Ignition 2.19.0
Jul 11 00:16:47.563985 ignition[912]: INFO : Stage: mount
Jul 11 00:16:47.563985 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:47.563985 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:47.563955 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:16:47.567916 ignition[912]: INFO : mount: mount passed
Jul 11 00:16:47.567916 ignition[912]: INFO : Ignition finished successfully
Jul 11 00:16:47.566464 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:16:47.585559 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:16:47.978790 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:16:47.993632 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:16:47.999899 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (928)
Jul 11 00:16:47.999929 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:16:47.999940 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:16:48.000579 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:16:48.003419 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:16:48.004200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:16:48.028167 ignition[945]: INFO : Ignition 2.19.0
Jul 11 00:16:48.028167 ignition[945]: INFO : Stage: files
Jul 11 00:16:48.029450 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:16:48.029450 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:16:48.029450 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:16:48.032197 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:16:48.032197 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:16:48.034249 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:16:48.034249 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:16:48.034249 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:16:48.034249 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 11 00:16:48.034249 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 11 00:16:48.032785 unknown[945]: wrote ssh authorized keys file for user: core
Jul 11 00:16:48.077598 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:16:48.323544 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 11 00:16:48.323544 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:16:48.326508 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 11 00:16:48.524598 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 11 00:16:48.693113 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 11 00:16:49.221679 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:16:49.221679 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 11 00:16:49.224853 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:16:49.243218 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:16:49.247241 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 00:16:49.248377 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 00:16:49.248377 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 00:16:49.248377 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 00:16:49.248377 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:16:49.248377 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 00:16:49.248377 ignition[945]: INFO : files: files passed
Jul 11 00:16:49.248377 ignition[945]: INFO : Ignition finished successfully
Jul 11 00:16:49.249654 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 00:16:49.261614 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 00:16:49.264594 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 00:16:49.267684 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:16:49.268463 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:16:49.271715 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:16:49.274622 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:49.274622 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:49.277604 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:16:49.277526 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:16:49.278615 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:16:49.289710 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:16:49.310320 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:16:49.310459 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:16:49.312288 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:16:49.313839 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:16:49.315328 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:16:49.316111 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:16:49.331888 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:16:49.334052 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:16:49.345718 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:16:49.346695 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:16:49.348337 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:16:49.349844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:16:49.349968 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:16:49.352099 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:16:49.353874 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:16:49.355240 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:16:49.356632 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:16:49.358273 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:16:49.359943 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:16:49.361433 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:16:49.363055 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:16:49.364695 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:16:49.366135 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:16:49.367398 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:16:49.367542 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:16:49.369563 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:16:49.371193 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:16:49.372790 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:16:49.373481 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:16:49.374498 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:16:49.374622 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:16:49.377121 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:16:49.377233 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:16:49.378843 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:16:49.380153 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:16:49.383456 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:16:49.384693 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:16:49.386446 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:16:49.387786 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:16:49.387874 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:16:49.389141 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:16:49.389219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:16:49.390480 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:16:49.390596 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:16:49.392050 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:16:49.392145 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:16:49.408633 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:16:49.410244 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:16:49.410912 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:16:49.411027 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:16:49.412292 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:16:49.412381 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:16:49.416736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:16:49.417448 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:16:49.423020 ignition[1000]: INFO : Ignition 2.19.0 Jul 11 00:16:49.423020 ignition[1000]: INFO : Stage: umount Jul 11 00:16:49.425574 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:16:49.425574 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:16:49.425574 ignition[1000]: INFO : umount: umount passed Jul 11 00:16:49.425574 ignition[1000]: INFO : Ignition finished successfully Jul 11 00:16:49.425272 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:16:49.427063 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:16:49.427192 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 11 00:16:49.431223 systemd[1]: Stopped target network.target - Network. Jul 11 00:16:49.433055 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jul 11 00:16:49.433118 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:16:49.434298 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:16:49.434338 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:16:49.435496 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:16:49.435542 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:16:49.436661 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:16:49.436697 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:16:49.438174 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:16:49.439396 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:16:49.449341 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:16:49.449469 systemd-networkd[764]: eth0: DHCPv6 lease lost Jul 11 00:16:49.450594 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:16:49.452264 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:16:49.452385 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:16:49.454525 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:16:49.454613 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:16:49.462572 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:16:49.463205 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:16:49.463265 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:16:49.464665 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:16:49.464705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:16:49.465982 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:16:49.466022 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:16:49.467781 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:16:49.467827 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:16:49.469364 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:16:49.478683 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:16:49.478804 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:16:49.481964 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:16:49.482073 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:16:49.483424 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:16:49.483467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:16:49.491112 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:16:49.491252 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:16:49.492977 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:16:49.493014 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 11 00:16:49.494429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:16:49.494458 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 11 00:16:49.496002 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:16:49.496044 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:16:49.498313 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:16:49.498353 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:16:49.500741 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:16:49.500782 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:16:49.514552 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:16:49.515287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:16:49.515336 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:16:49.517161 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 11 00:16:49.517200 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:16:49.518807 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:16:49.518842 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:16:49.520610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:16:49.520647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:16:49.522514 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:16:49.522606 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:16:49.525248 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:16:49.527912 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:16:49.538350 systemd[1]: Switching root. Jul 11 00:16:49.565356 systemd-journald[236]: Journal stopped Jul 11 00:16:50.301328 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Jul 11 00:16:50.303162 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:16:50.303184 kernel: SELinux: policy capability open_perms=1 Jul 11 00:16:50.303194 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:16:50.303204 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:16:50.303214 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:16:50.303224 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:16:50.303234 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:16:50.303243 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:16:50.303252 kernel: audit: type=1403 audit(1752193009.759:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:16:50.303265 systemd[1]: Successfully loaded SELinux policy in 29.854ms. Jul 11 00:16:50.303282 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.924ms. Jul 11 00:16:50.303294 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:16:50.303305 systemd[1]: Detected virtualization kvm. Jul 11 00:16:50.303316 systemd[1]: Detected architecture arm64. 
Jul 11 00:16:50.303327 systemd[1]: Detected first boot. Jul 11 00:16:50.303337 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:16:50.303348 zram_generator::config[1046]: No configuration found. Jul 11 00:16:50.303361 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:16:50.303372 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:16:50.303382 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:16:50.303393 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:16:50.303412 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:16:50.303424 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:16:50.303434 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:16:50.303444 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:16:50.303454 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:16:50.303467 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:16:50.303478 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:16:50.303491 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:16:50.303501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:16:50.303512 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:16:50.303522 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:16:50.303539 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:16:50.303553 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:16:50.303565 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:16:50.303576 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 11 00:16:50.303587 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:16:50.303597 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:16:50.303607 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:16:50.303617 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:16:50.303628 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:16:50.303638 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:16:50.303650 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:16:50.303661 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:16:50.303671 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:16:50.303682 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:16:50.303692 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:16:50.303703 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:16:50.303713 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 11 00:16:50.303725 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:16:50.303736 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:16:50.303749 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 11 00:16:50.303760 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:16:50.303771 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:16:50.303781 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:16:50.303805 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:16:50.303815 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:16:50.303827 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:16:50.303838 systemd[1]: Reached target machines.target - Containers. Jul 11 00:16:50.303848 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:16:50.303860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:16:50.303871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:16:50.303882 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:16:50.303892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:16:50.303903 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:16:50.303913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:16:50.303924 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:16:50.303934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:16:50.303947 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:16:50.303958 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:16:50.303969 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:16:50.303980 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:16:50.303991 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:16:50.304001 kernel: fuse: init (API version 7.39) Jul 11 00:16:50.304010 kernel: loop: module loaded Jul 11 00:16:50.304020 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:16:50.304031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:16:50.304043 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:16:50.304056 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:16:50.304098 systemd-journald[1113]: Collecting audit messages is disabled. Jul 11 00:16:50.304119 kernel: ACPI: bus type drm_connector registered Jul 11 00:16:50.304129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:16:50.304140 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:16:50.304150 systemd[1]: Stopped verity-setup.service. 
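[Annotation] The modprobe@*.service starts above are instances of systemd's stock modprobe@.service template: each instance runs modprobe against the name after the '@' (configfs, dm_mod, drm, efi_pstore, fuse, loop here); nothing about the mechanism is Flatcar-specific. An equivalent manual session, for illustration:

  systemctl cat modprobe@.service         # show the template body systemd instantiates
  systemctl start modprobe@fuse.service   # same mechanism the boot used for fuse
  lsmod | grep -e fuse -e loop            # confirm the modules are loaded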
Jul 11 00:16:50.304162 systemd-journald[1113]: Journal started Jul 11 00:16:50.304199 systemd-journald[1113]: Runtime Journal (/run/log/journal/b94a6f3f24264b418e84ad34482f86e1) is 5.9M, max 47.3M, 41.4M free. Jul 11 00:16:50.119165 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:16:50.138074 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:16:50.138473 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:16:50.308199 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:16:50.313742 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:16:50.315122 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:16:50.316319 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:16:50.317161 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:16:50.318083 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:16:50.319012 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:16:50.321426 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:16:50.322906 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:16:50.324079 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:16:50.324253 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:16:50.325523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:16:50.325691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:16:50.326754 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:16:50.326896 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:16:50.327895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:16:50.328032 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:16:50.329148 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:16:50.329288 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:16:50.331606 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:16:50.331771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:16:50.333479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:16:50.334857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:16:50.336044 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:16:50.347830 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:16:50.360538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:16:50.362738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:16:50.363569 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:16:50.363600 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:16:50.365232 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:16:50.367241 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 11 00:16:50.369649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:16:50.370590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:16:50.372312 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 11 00:16:50.374030 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:16:50.374961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:16:50.376065 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:16:50.377163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:16:50.378551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:16:50.381610 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:16:50.387979 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 11 00:16:50.390707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:16:50.392715 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:16:50.395672 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:16:50.395825 systemd-journald[1113]: Time spent on flushing to /var/log/journal/b94a6f3f24264b418e84ad34482f86e1 is 25.785ms for 860 entries. Jul 11 00:16:50.395825 systemd-journald[1113]: System Journal (/var/log/journal/b94a6f3f24264b418e84ad34482f86e1) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:16:50.433006 systemd-journald[1113]: Received client request to flush runtime journal. Jul 11 00:16:50.433064 kernel: loop0: detected capacity change from 0 to 207008 Jul 11 00:16:50.433085 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:16:50.397468 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:16:50.398902 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:16:50.402552 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:16:50.411718 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:16:50.417624 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:16:50.428475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:16:50.436122 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:16:50.438499 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:16:50.442731 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jul 11 00:16:50.442750 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jul 11 00:16:50.447581 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 11 00:16:50.454959 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jul 11 00:16:50.458451 kernel: loop1: detected capacity change from 0 to 114432 Jul 11 00:16:50.472885 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:16:50.473537 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:16:50.480429 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:16:50.482477 kernel: loop2: detected capacity change from 0 to 114328 Jul 11 00:16:50.490675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:16:50.505071 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jul 11 00:16:50.505089 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Jul 11 00:16:50.509507 kernel: loop3: detected capacity change from 0 to 207008 Jul 11 00:16:50.510380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:16:50.517444 kernel: loop4: detected capacity change from 0 to 114432 Jul 11 00:16:50.522439 kernel: loop5: detected capacity change from 0 to 114328 Jul 11 00:16:50.525486 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:16:50.525959 (sd-merge)[1184]: Merged extensions into '/usr'. Jul 11 00:16:50.529663 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:16:50.529812 systemd[1]: Reloading... Jul 11 00:16:50.584636 zram_generator::config[1208]: No configuration found. Jul 11 00:16:50.672859 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:16:50.688821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:16:50.725789 systemd[1]: Reloading finished in 195 ms. Jul 11 00:16:50.757601 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:16:50.758931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:16:50.772607 systemd[1]: Starting ensure-sysext.service... Jul 11 00:16:50.775067 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:16:50.784164 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:16:50.784178 systemd[1]: Reloading... Jul 11 00:16:50.793737 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:16:50.794004 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:16:50.794666 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:16:50.794890 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jul 11 00:16:50.794947 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jul 11 00:16:50.797073 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:16:50.797086 systemd-tmpfiles[1246]: Skipping /boot Jul 11 00:16:50.804201 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:16:50.804218 systemd-tmpfiles[1246]: Skipping /boot Jul 11 00:16:50.829436 zram_generator::config[1273]: No configuration found. 
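[Annotation] The (sd-merge) lines show systemd-sysext at work: the containerd-flatcar and docker-flatcar images shipped with the OS plus the kubernetes image written by Ignition are overlaid onto /usr, and the subsequent daemon reload makes the merged unit files visible to systemd. The merge state can be inspected or redone from a shell (illustrative commands; the extension names are the ones logged above):

  systemd-sysext status    # per-hierarchy list of merged extension images
  ls -l /etc/extensions    # kubernetes.raw symlink created by Ignition
  systemd-sysext refresh   # re-merge after adding or removing an image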
Jul 11 00:16:50.914489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:16:50.950380 systemd[1]: Reloading finished in 165 ms. Jul 11 00:16:50.965470 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:16:50.978912 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:16:50.986579 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:16:50.989118 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 11 00:16:50.991355 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:16:50.995689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:16:51.003724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:16:51.006064 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:16:51.010100 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:16:51.013655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:16:51.017224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:16:51.019350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:16:51.020627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:16:51.025920 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:16:51.027779 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:16:51.027942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:16:51.031165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:16:51.031297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:16:51.035670 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:16:51.039464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:16:51.039609 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:16:51.050334 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jul 11 00:16:51.052807 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:16:51.054869 augenrules[1337]: No rules Jul 11 00:16:51.056349 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:16:51.058334 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:16:51.063875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:16:51.073677 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:16:51.077448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:16:51.081171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:16:51.083991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 11 00:16:51.086682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:16:51.088731 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:16:51.090623 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:16:51.092635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:16:51.094223 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:16:51.097224 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:16:51.097396 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:16:51.101210 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:16:51.101344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:16:51.104009 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:16:51.104562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:16:51.110132 systemd[1]: Finished ensure-sysext.service. Jul 11 00:16:51.122416 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1374) Jul 11 00:16:51.123824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:16:51.124027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:16:51.129784 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 11 00:16:51.131274 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:16:51.149630 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:16:51.151773 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:16:51.151853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:16:51.154281 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:16:51.157397 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:16:51.159978 systemd-resolved[1314]: Positive Trust Anchors: Jul 11 00:16:51.159993 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:16:51.160025 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:16:51.163609 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:16:51.167190 systemd-resolved[1314]: Defaulting to hostname 'linux'. Jul 11 00:16:51.174780 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 11 00:16:51.176383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:16:51.196377 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:16:51.211262 systemd-networkd[1387]: lo: Link UP Jul 11 00:16:51.211269 systemd-networkd[1387]: lo: Gained carrier Jul 11 00:16:51.212022 systemd-networkd[1387]: Enumeration completed Jul 11 00:16:51.212113 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:16:51.213595 systemd[1]: Reached target network.target - Network. Jul 11 00:16:51.216072 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:16:51.216080 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:16:51.216784 systemd-networkd[1387]: eth0: Link UP Jul 11 00:16:51.216791 systemd-networkd[1387]: eth0: Gained carrier Jul 11 00:16:51.216803 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:16:51.221593 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:16:51.222797 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:16:51.224241 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:16:51.230829 systemd-networkd[1387]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:16:51.231488 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection. Jul 11 00:16:51.233337 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:16:51.233561 systemd-timesyncd[1388]: Initial clock synchronization to Fri 2025-07-11 00:16:51.537181 UTC. Jul 11 00:16:51.240683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:16:51.254918 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:16:51.265600 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:16:51.277157 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:16:51.277986 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:16:51.308011 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:16:51.309521 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:16:51.310303 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 00:16:51.311149 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:16:51.312050 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:16:51.313094 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:16:51.313953 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:16:51.314846 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:16:51.315704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
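[Annotation] eth0 was matched by the stock catch-all unit /usr/lib/systemd/network/zz-default.network and configured via DHCPv4 (10.0.0.77/16, gateway 10.0.0.1). The log names that file but not its body; a minimal catch-all of the same shape, written here as a hypothetical local unit rather than the real zz-default.network, would look like:

  # Hypothetical local unit; zz-default.network's actual contents are not in the log.
  cat > /etc/systemd/network/50-dhcp.network <<'EOF'
  [Match]
  Name=eth*

  [Network]
  DHCP=yes
  EOF
  networkctl status eth0   # should report the DHCP address, cf. 10.0.0.77/16 above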
Jul 11 00:16:51.315739 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:16:51.316347 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:16:51.317854 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:16:51.320082 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:16:51.334475 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:16:51.336482 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:16:51.337732 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:16:51.338716 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:16:51.339377 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:16:51.340070 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:16:51.340102 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:16:51.341091 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:16:51.342850 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:16:51.344152 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:16:51.346549 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:16:51.349931 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:16:51.351042 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:16:51.355522 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:16:51.359345 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:16:51.362732 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:16:51.364474 jq[1411]: false Jul 11 00:16:51.365073 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 00:16:51.369665 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:16:51.370373 dbus-daemon[1410]: [system] SELinux support is enabled Jul 11 00:16:51.374898 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:16:51.375333 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:16:51.376011 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:16:51.380609 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:16:51.381989 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:16:51.387449 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:16:51.389851 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:16:51.390011 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:16:51.391610 jq[1427]: true Jul 11 00:16:51.399096 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:16:51.400447 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 11 00:16:51.408334 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:16:51.408389 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:16:51.410457 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:16:51.410483 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:16:51.412329 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:16:51.412620 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:16:51.421008 extend-filesystems[1412]: Found loop3 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found loop4 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found loop5 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda1 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda2 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda3 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found usr Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda4 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda6 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda7 Jul 11 00:16:51.423353 extend-filesystems[1412]: Found vda9 Jul 11 00:16:51.423353 extend-filesystems[1412]: Checking size of /dev/vda9 Jul 11 00:16:51.433442 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:16:51.456949 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:16:51.456997 jq[1431]: true Jul 11 00:16:51.457167 extend-filesystems[1412]: Resized partition /dev/vda9 Jul 11 00:16:51.460293 update_engine[1426]: I20250711 00:16:51.451931 1426 main.cc:92] Flatcar Update Engine starting Jul 11 00:16:51.462805 tar[1430]: linux-arm64/LICENSE Jul 11 00:16:51.462805 tar[1430]: linux-arm64/helm Jul 11 00:16:51.460031 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:16:51.466999 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:16:51.494816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1359) Jul 11 00:16:51.494890 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:16:51.494910 update_engine[1426]: I20250711 00:16:51.460496 1426 update_check_scheduler.cc:74] Next update check in 2m40s Jul 11 00:16:51.460366 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:16:51.460568 systemd-logind[1419]: New seat seat0. Jul 11 00:16:51.473653 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:16:51.497631 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:16:51.497631 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:16:51.497631 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:16:51.475088 systemd[1]: Started systemd-logind.service - User Login Management. 
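[Annotation] extend-filesystems is growing the root ext4 online from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to 7.1 GiB, filling space the /dev/vda9 partition already has. The same grow can be done by hand, since resize2fs supports online growth of a mounted ext4 (illustrative):

  lsblk /dev/vda9       # partition is larger than the filesystem it holds
  resize2fs /dev/vda9   # online-grows the mounted ext4 to fill the partition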
Jul 11 00:16:51.507163 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jul 11 00:16:51.501518 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:16:51.501705 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:16:51.531860 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:16:51.540848 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:16:51.544693 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:16:51.665481 locksmithd[1450]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:16:51.687128 containerd[1439]: time="2025-07-11T00:16:51.687020840Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:16:51.721629 containerd[1439]: time="2025-07-11T00:16:51.721266720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.722823120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.722859040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.722875320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723031400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723054880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723112000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723125640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723285800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723302680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723315280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:16:51.723906 containerd[1439]: time="2025-07-11T00:16:51.723326320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:16:51.724153 containerd[1439]: time="2025-07-11T00:16:51.723394040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:16:51.724153 containerd[1439]: time="2025-07-11T00:16:51.723615040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:16:51.724153 containerd[1439]: time="2025-07-11T00:16:51.723710720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:16:51.724153 containerd[1439]: time="2025-07-11T00:16:51.723724480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:16:51.724153 containerd[1439]: time="2025-07-11T00:16:51.723793680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:16:51.724153 containerd[1439]: time="2025-07-11T00:16:51.723831480Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:16:51.727851 containerd[1439]: time="2025-07-11T00:16:51.727823720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:16:51.727935 containerd[1439]: time="2025-07-11T00:16:51.727878280Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:16:51.727935 containerd[1439]: time="2025-07-11T00:16:51.727897880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:16:51.727935 containerd[1439]: time="2025-07-11T00:16:51.727918760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:16:51.727987 containerd[1439]: time="2025-07-11T00:16:51.727935360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:16:51.728101 containerd[1439]: time="2025-07-11T00:16:51.728077360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:16:51.728344 containerd[1439]: time="2025-07-11T00:16:51.728328960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:16:51.728482 containerd[1439]: time="2025-07-11T00:16:51.728464920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:16:51.728519 containerd[1439]: time="2025-07-11T00:16:51.728487480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:16:51.728519 containerd[1439]: time="2025-07-11T00:16:51.728500640Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:16:51.728519 containerd[1439]: time="2025-07-11T00:16:51.728513760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728581 containerd[1439]: time="2025-07-11T00:16:51.728540960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 11 00:16:51.728581 containerd[1439]: time="2025-07-11T00:16:51.728555960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728581 containerd[1439]: time="2025-07-11T00:16:51.728570880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728642 containerd[1439]: time="2025-07-11T00:16:51.728593200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728642 containerd[1439]: time="2025-07-11T00:16:51.728606200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728642 containerd[1439]: time="2025-07-11T00:16:51.728618360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728642 containerd[1439]: time="2025-07-11T00:16:51.728633080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:16:51.728715 containerd[1439]: time="2025-07-11T00:16:51.728653800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728715 containerd[1439]: time="2025-07-11T00:16:51.728668360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728715 containerd[1439]: time="2025-07-11T00:16:51.728681880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728715 containerd[1439]: time="2025-07-11T00:16:51.728697440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728715 containerd[1439]: time="2025-07-11T00:16:51.728711320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728799 containerd[1439]: time="2025-07-11T00:16:51.728724360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728799 containerd[1439]: time="2025-07-11T00:16:51.728735800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728799 containerd[1439]: time="2025-07-11T00:16:51.728750960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728799 containerd[1439]: time="2025-07-11T00:16:51.728768320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728799 containerd[1439]: time="2025-07-11T00:16:51.728782080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728799 containerd[1439]: time="2025-07-11T00:16:51.728793320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728898 containerd[1439]: time="2025-07-11T00:16:51.728805840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728898 containerd[1439]: time="2025-07-11T00:16:51.728819560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 11 00:16:51.728898 containerd[1439]: time="2025-07-11T00:16:51.728834800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:16:51.728898 containerd[1439]: time="2025-07-11T00:16:51.728855360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728898 containerd[1439]: time="2025-07-11T00:16:51.728867840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.728898 containerd[1439]: time="2025-07-11T00:16:51.728878800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:16:51.729008 containerd[1439]: time="2025-07-11T00:16:51.728981280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:16:51.729029 containerd[1439]: time="2025-07-11T00:16:51.729004240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:16:51.729029 containerd[1439]: time="2025-07-11T00:16:51.729015840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:16:51.729068 containerd[1439]: time="2025-07-11T00:16:51.729027720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:16:51.729068 containerd[1439]: time="2025-07-11T00:16:51.729037040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:16:51.729068 containerd[1439]: time="2025-07-11T00:16:51.729052560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:16:51.729068 containerd[1439]: time="2025-07-11T00:16:51.729063040Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:16:51.729132 containerd[1439]: time="2025-07-11T00:16:51.729073240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:16:51.729739 containerd[1439]: time="2025-07-11T00:16:51.729621120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:16:51.729846 containerd[1439]: time="2025-07-11T00:16:51.729753680Z" level=info msg="Connect containerd service" Jul 11 00:16:51.729882 containerd[1439]: time="2025-07-11T00:16:51.729869200Z" level=info msg="using legacy CRI server" Jul 11 00:16:51.729903 containerd[1439]: time="2025-07-11T00:16:51.729881000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:16:51.730131 containerd[1439]: time="2025-07-11T00:16:51.730112480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:16:51.731136 containerd[1439]: time="2025-07-11T00:16:51.731095120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:16:51.732269 
containerd[1439]: time="2025-07-11T00:16:51.731668080Z" level=info msg="Start subscribing containerd event" Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.731714480Z" level=info msg="Start recovering state" Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.731809120Z" level=info msg="Start event monitor" Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.731822720Z" level=info msg="Start snapshots syncer" Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.731832760Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.731845280Z" level=info msg="Start streaming server" Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.732188880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:16:51.732269 containerd[1439]: time="2025-07-11T00:16:51.732233560Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:16:51.732580 containerd[1439]: time="2025-07-11T00:16:51.732558760Z" level=info msg="containerd successfully booted in 0.046406s" Jul 11 00:16:51.732650 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:16:51.848092 sshd_keygen[1428]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:16:51.872330 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:16:51.885717 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:16:51.893788 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:16:51.895489 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:16:51.896773 tar[1430]: linux-arm64/README.md Jul 11 00:16:51.909608 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:16:51.912542 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:16:51.923136 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:16:51.938793 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:16:51.941068 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 00:16:51.942216 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:16:52.813187 systemd-networkd[1387]: eth0: Gained IPv6LL Jul 11 00:16:52.816271 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:16:52.817943 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:16:52.830792 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:16:52.833792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:16:52.835812 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:16:52.853025 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:16:52.853239 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:16:52.855121 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:16:52.861777 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:16:53.440627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:16:53.441965 systemd[1]: Reached target multi-user.target - Multi-User System. 
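The "failed to load cni during init" error above is benign at this stage: the CRI plugin watches /etc/cni/net.d (per the CniConfig dump earlier) and reports pod networking as not ready until a network add-on installs a config there. For illustration only — this host will get its real config from whatever CNI add-on is deployed later — a minimal bridge conflist, saved under a hypothetical name like /etc/cni/net.d/10-example.conflist, has this shape:

    {
      "cniVersion": "1.0.0",
      "name": "examplenet",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[ { "subnet": "10.22.0.0/16" } ]],
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        }
      ]
    }

Once a file like this appears, the cni conf syncer started above ("Start cni network conf syncer for default") picks it up without a containerd restart.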
Jul 11 00:16:53.443405 systemd[1]: Startup finished in 624ms (kernel) + 5.023s (initrd) + 3.717s (userspace) = 9.365s. Jul 11 00:16:53.446130 (kubelet)[1522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:16:53.985102 kubelet[1522]: E0711 00:16:53.985002 1522 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:16:53.987551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:16:53.987710 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:16:58.132292 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:16:58.134784 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:57218.service - OpenSSH per-connection server daemon (10.0.0.1:57218). Jul 11 00:16:58.207681 sshd[1535]: Accepted publickey for core from 10.0.0.1 port 57218 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:58.211744 sshd[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:58.221293 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:16:58.230679 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:16:58.232244 systemd-logind[1419]: New session 1 of user core. Jul 11 00:16:58.241322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:16:58.252699 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:16:58.255086 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:16:58.335632 systemd[1539]: Queued start job for default target default.target. Jul 11 00:16:58.347449 systemd[1539]: Created slice app.slice - User Application Slice. Jul 11 00:16:58.347476 systemd[1539]: Reached target paths.target - Paths. Jul 11 00:16:58.347489 systemd[1539]: Reached target timers.target - Timers. Jul 11 00:16:58.348849 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:16:58.359124 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:16:58.359197 systemd[1539]: Reached target sockets.target - Sockets. Jul 11 00:16:58.359209 systemd[1539]: Reached target basic.target - Basic System. Jul 11 00:16:58.359246 systemd[1539]: Reached target default.target - Main User Target. Jul 11 00:16:58.359277 systemd[1539]: Startup finished in 98ms. Jul 11 00:16:58.359504 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:16:58.360787 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:16:58.421819 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:57224.service - OpenSSH per-connection server daemon (10.0.0.1:57224). Jul 11 00:16:58.461308 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 57224 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:58.462402 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:58.467073 systemd-logind[1419]: New session 2 of user core. Jul 11 00:16:58.478611 systemd[1]: Started session-2.scope - Session 2 of User core. 
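The kubelet exit just after startup is the expected first-boot failure on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, and systemd keeps restarting the unit until the file exists. As a sketch only (the real file is generated, and these values are assumptions rather than taken from this host), a minimal KubeletConfiguration looks like:

    # /var/lib/kubelet/config.yaml -- hypothetical minimal example;
    # in practice kubeadm generates this file, it is not hand-written
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd          # matches SystemdCgroup:true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
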
Jul 11 00:16:58.530683 sshd[1550]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:58.541860 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:57224.service: Deactivated successfully. Jul 11 00:16:58.543302 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:16:58.544621 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:16:58.546730 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234). Jul 11 00:16:58.547623 systemd-logind[1419]: Removed session 2. Jul 11 00:16:58.586508 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:58.587865 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:58.592183 systemd-logind[1419]: New session 3 of user core. Jul 11 00:16:58.602596 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:16:58.651475 sshd[1557]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:58.657746 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:57234.service: Deactivated successfully. Jul 11 00:16:58.660818 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:16:58.662147 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:16:58.664794 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:57246.service - OpenSSH per-connection server daemon (10.0.0.1:57246). Jul 11 00:16:58.665663 systemd-logind[1419]: Removed session 3. Jul 11 00:16:58.707256 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 57246 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:58.708550 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:58.713383 systemd-logind[1419]: New session 4 of user core. Jul 11 00:16:58.726638 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:16:58.780518 sshd[1564]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:58.794185 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:57246.service: Deactivated successfully. Jul 11 00:16:58.797870 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:16:58.799164 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:16:58.800358 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:57252.service - OpenSSH per-connection server daemon (10.0.0.1:57252). Jul 11 00:16:58.801196 systemd-logind[1419]: Removed session 4. Jul 11 00:16:58.845141 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 57252 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:58.846588 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:58.850496 systemd-logind[1419]: New session 5 of user core. Jul 11 00:16:58.860634 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:16:58.920112 sudo[1574]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:16:58.920393 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:16:58.946327 sudo[1574]: pam_unix(sudo:session): session closed for user root Jul 11 00:16:58.948209 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:58.957834 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:57252.service: Deactivated successfully. 
Jul 11 00:16:58.960469 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:16:58.961709 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:16:58.972730 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:57266.service - OpenSSH per-connection server daemon (10.0.0.1:57266). Jul 11 00:16:58.973599 systemd-logind[1419]: Removed session 5. Jul 11 00:16:59.008211 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 57266 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:59.009457 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:59.013478 systemd-logind[1419]: New session 6 of user core. Jul 11 00:16:59.023600 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:16:59.075072 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:16:59.075363 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:16:59.078436 sudo[1583]: pam_unix(sudo:session): session closed for user root Jul 11 00:16:59.082771 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:16:59.083027 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:16:59.100687 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:16:59.101726 auditctl[1586]: No rules Jul 11 00:16:59.102578 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:16:59.103500 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:16:59.105157 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:16:59.129618 augenrules[1604]: No rules Jul 11 00:16:59.130311 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:16:59.131319 sudo[1582]: pam_unix(sudo:session): session closed for user root Jul 11 00:16:59.132862 sshd[1579]: pam_unix(sshd:session): session closed for user core Jul 11 00:16:59.145053 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:57266.service: Deactivated successfully. Jul 11 00:16:59.146506 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:16:59.147800 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:16:59.158689 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:57268.service - OpenSSH per-connection server daemon (10.0.0.1:57268). Jul 11 00:16:59.159449 systemd-logind[1419]: Removed session 6. Jul 11 00:16:59.196524 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 57268 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:16:59.198164 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:16:59.201770 systemd-logind[1419]: New session 7 of user core. Jul 11 00:16:59.218576 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 00:16:59.271320 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:16:59.271974 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:16:59.624618 systemd[1]: Starting docker.service - Docker Application Container Engine... 
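The sudo sequence above deletes the two rule fragments from /etc/audit/rules.d/ and restarts audit-rules; augenrules then compiles an empty set, which is why both auditctl and augenrules report "No rules". Fragments in that directory use plain auditctl syntax, one rule per line — for example (a hypothetical rule, not taken from this host):

    # /etc/audit/rules.d/10-example.rules -- hypothetical fragment;
    # augenrules concatenates rules.d/*.rules into /etc/audit/audit.rules
    -w /etc/ssh/sshd_config -p wa -k sshd_config
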
Jul 11 00:16:59.624767 (dockerd)[1634]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:16:59.884775 dockerd[1634]: time="2025-07-11T00:16:59.884123394Z" level=info msg="Starting up" Jul 11 00:17:00.033777 dockerd[1634]: time="2025-07-11T00:17:00.033726649Z" level=info msg="Loading containers: start." Jul 11 00:17:00.116451 kernel: Initializing XFRM netlink socket Jul 11 00:17:00.177694 systemd-networkd[1387]: docker0: Link UP Jul 11 00:17:00.196594 dockerd[1634]: time="2025-07-11T00:17:00.196548717Z" level=info msg="Loading containers: done." Jul 11 00:17:00.208861 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2460195566-merged.mount: Deactivated successfully. Jul 11 00:17:00.210397 dockerd[1634]: time="2025-07-11T00:17:00.210193020Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:17:00.210397 dockerd[1634]: time="2025-07-11T00:17:00.210360012Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:17:00.210510 dockerd[1634]: time="2025-07-11T00:17:00.210486280Z" level=info msg="Daemon has completed initialization" Jul 11 00:17:00.237972 dockerd[1634]: time="2025-07-11T00:17:00.237845759Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:17:00.238190 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:17:00.835422 containerd[1439]: time="2025-07-11T00:17:00.835288331Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 00:17:01.516659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2774861180.mount: Deactivated successfully. 
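After "API listen on /run/docker.sock", the daemon answers HTTP over the Unix socket. A stdlib-only Go probe of the /_ping endpoint — a sketch that assumes the default socket path shown in the log and enough permissions to open it:

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"log"
    	"net"
    	"net/http"
    )

    func main() {
    	// Route every request over the Unix socket instead of TCP.
    	tr := &http.Transport{
    		DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
    			return net.Dial("unix", "/run/docker.sock")
    		},
    	}
    	client := &http.Client{Transport: tr}

    	// The host part of the URL is ignored once DialContext overrides the dial.
    	resp, err := client.Get("http://docker/_ping")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s %s\n", resp.Status, body) // a healthy daemon answers "200 OK OK"
    }
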
Jul 11 00:17:02.783594 containerd[1439]: time="2025-07-11T00:17:02.783522109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:02.783925 containerd[1439]: time="2025-07-11T00:17:02.783850964Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 11 00:17:02.784790 containerd[1439]: time="2025-07-11T00:17:02.784759154Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:02.787875 containerd[1439]: time="2025-07-11T00:17:02.787843362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:02.788911 containerd[1439]: time="2025-07-11T00:17:02.788875659Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.953540762s" Jul 11 00:17:02.788945 containerd[1439]: time="2025-07-11T00:17:02.788918402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 11 00:17:02.789803 containerd[1439]: time="2025-07-11T00:17:02.789776334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 00:17:04.237967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:17:04.247636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:04.352495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:04.356852 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:17:04.407513 kubelet[1845]: E0711 00:17:04.407431 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:17:04.410830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:17:04.410976 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 11 00:17:04.651870 containerd[1439]: time="2025-07-11T00:17:04.651734799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:04.652908 containerd[1439]: time="2025-07-11T00:17:04.652872561Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 11 00:17:04.655749 containerd[1439]: time="2025-07-11T00:17:04.655688108Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:04.670938 containerd[1439]: time="2025-07-11T00:17:04.670823958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:04.672222 containerd[1439]: time="2025-07-11T00:17:04.671892918Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.882084407s" Jul 11 00:17:04.672222 containerd[1439]: time="2025-07-11T00:17:04.672109765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 11 00:17:04.672874 containerd[1439]: time="2025-07-11T00:17:04.672575462Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 00:17:06.102945 containerd[1439]: time="2025-07-11T00:17:06.102880849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:06.104466 containerd[1439]: time="2025-07-11T00:17:06.104380305Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 11 00:17:06.105384 containerd[1439]: time="2025-07-11T00:17:06.105329797Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:06.108709 containerd[1439]: time="2025-07-11T00:17:06.108672189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:06.110739 containerd[1439]: time="2025-07-11T00:17:06.110419690Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.437798117s" Jul 11 00:17:06.110739 containerd[1439]: time="2025-07-11T00:17:06.110461655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 11 00:17:06.110917 
containerd[1439]: time="2025-07-11T00:17:06.110891720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:17:07.320497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3176847661.mount: Deactivated successfully. Jul 11 00:17:07.549808 containerd[1439]: time="2025-07-11T00:17:07.549751881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:07.550659 containerd[1439]: time="2025-07-11T00:17:07.550438303Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 11 00:17:07.551401 containerd[1439]: time="2025-07-11T00:17:07.551343360Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:07.553448 containerd[1439]: time="2025-07-11T00:17:07.553360330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:07.554451 containerd[1439]: time="2025-07-11T00:17:07.554373378Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.443447425s" Jul 11 00:17:07.554451 containerd[1439]: time="2025-07-11T00:17:07.554425685Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 11 00:17:07.555019 containerd[1439]: time="2025-07-11T00:17:07.554914579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:17:08.195247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818883251.mount: Deactivated successfully. 
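Each pull above follows the same shape: a PullImage request, ImageCreate events for the tag, the platform image digest, and the repo digest, then a final "Pulled ... in <duration>" line. The same operation can be driven directly, sketched here with the containerd Go client (github.com/containerd/containerd is an assumed dependency; the image name is one of those pulled above):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the socket the daemon advertised at boot.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// WithPullUnpack unpacks layers into the snapshotter (overlayfs here),
    	// so the image is immediately usable for containers.
    	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(image.Name(), image.Target().Digest)
    }
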
Jul 11 00:17:09.182558 containerd[1439]: time="2025-07-11T00:17:09.182492705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:09.182973 containerd[1439]: time="2025-07-11T00:17:09.182917486Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 11 00:17:09.183990 containerd[1439]: time="2025-07-11T00:17:09.183949201Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:09.187293 containerd[1439]: time="2025-07-11T00:17:09.187220473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:09.188504 containerd[1439]: time="2025-07-11T00:17:09.188472288Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.633520048s" Jul 11 00:17:09.188570 containerd[1439]: time="2025-07-11T00:17:09.188512003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 00:17:09.189335 containerd[1439]: time="2025-07-11T00:17:09.189059986Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:17:09.695438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773507556.mount: Deactivated successfully. 
Jul 11 00:17:09.715906 containerd[1439]: time="2025-07-11T00:17:09.715844393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:09.716819 containerd[1439]: time="2025-07-11T00:17:09.716639703Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 00:17:09.717728 containerd[1439]: time="2025-07-11T00:17:09.717664751Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:09.721364 containerd[1439]: time="2025-07-11T00:17:09.721290489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:09.722446 containerd[1439]: time="2025-07-11T00:17:09.722127041Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.029789ms" Jul 11 00:17:09.722446 containerd[1439]: time="2025-07-11T00:17:09.722165190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:17:09.723117 containerd[1439]: time="2025-07-11T00:17:09.723083742Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 00:17:10.367323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532859229.mount: Deactivated successfully. Jul 11 00:17:12.551071 containerd[1439]: time="2025-07-11T00:17:12.551017027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:12.552415 containerd[1439]: time="2025-07-11T00:17:12.552370778Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 11 00:17:12.553138 containerd[1439]: time="2025-07-11T00:17:12.553106066Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:12.556523 containerd[1439]: time="2025-07-11T00:17:12.556475224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:12.557721 containerd[1439]: time="2025-07-11T00:17:12.557679022Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.83455682s" Jul 11 00:17:12.557760 containerd[1439]: time="2025-07-11T00:17:12.557723739Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 11 00:17:14.661451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
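"Scheduled restart job, restart counter is at 2" means the unit's Restart= policy is doing the retrying while config.yaml is still absent. The throttling knobs live in the unit file or a drop-in; as a sketch, with values assumed rather than read from this host:

    # /etc/systemd/system/kubelet.service.d/10-restart.conf -- hypothetical drop-in
    [Service]
    Restart=on-failure
    RestartSec=10
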
Jul 11 00:17:14.674795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:14.771212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:14.774811 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:17:14.806807 kubelet[2010]: E0711 00:17:14.806735 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:17:14.809568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:17:14.809728 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:17:17.652481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:17.659730 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:17.680894 systemd[1]: Reloading requested from client PID 2026 ('systemctl') (unit session-7.scope)... Jul 11 00:17:17.680910 systemd[1]: Reloading... Jul 11 00:17:17.748480 zram_generator::config[2066]: No configuration found. Jul 11 00:17:17.862387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:17:17.914918 systemd[1]: Reloading finished in 233 ms. Jul 11 00:17:17.954870 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 00:17:17.954945 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 00:17:17.955143 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:17.958434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:17:18.057788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:17:18.061270 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:17:18.093629 kubelet[2111]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:17:18.093629 kubelet[2111]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:17:18.093629 kubelet[2111]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
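The three deprecation warnings map onto KubeletConfiguration fields: --container-runtime-endpoint and --volume-plugin-dir have direct config-file equivalents, while --pod-infra-container-image is going away entirely because, as the warning says, the image garbage collector will get the sandbox image from the CRI. A hedged sketch of the file-based form, with values inferred from what this host logs elsewhere (the containerd socket and the flexvolume path):

    # equivalents inside /var/lib/kubelet/config.yaml -- illustrative only
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
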
Jul 11 00:17:18.093992 kubelet[2111]: I0711 00:17:18.093683 2111 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:17:19.305422 kubelet[2111]: I0711 00:17:19.305376 2111 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:17:19.305422 kubelet[2111]: I0711 00:17:19.305422 2111 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:17:19.305855 kubelet[2111]: I0711 00:17:19.305824 2111 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:17:19.344539 kubelet[2111]: E0711 00:17:19.344487 2111 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:19.347330 kubelet[2111]: I0711 00:17:19.347306 2111 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:17:19.356680 kubelet[2111]: E0711 00:17:19.356624 2111 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:17:19.356680 kubelet[2111]: I0711 00:17:19.356676 2111 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:17:19.359780 kubelet[2111]: I0711 00:17:19.359757 2111 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 00:17:19.361220 kubelet[2111]: I0711 00:17:19.361156 2111 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:17:19.361410 kubelet[2111]: I0711 00:17:19.361220 2111 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:17:19.361515 kubelet[2111]: I0711 00:17:19.361498 2111 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:17:19.361515 kubelet[2111]: I0711 00:17:19.361509 2111 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 00:17:19.361739 kubelet[2111]: I0711 00:17:19.361715 2111 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:17:19.365849 kubelet[2111]: I0711 00:17:19.365816 2111 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:17:19.365849 kubelet[2111]: I0711 00:17:19.365847 2111 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:17:19.365918 kubelet[2111]: I0711 00:17:19.365873 2111 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:17:19.365918 kubelet[2111]: I0711 00:17:19.365884 2111 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:17:19.367819 kubelet[2111]: W0711 00:17:19.367764 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jul 11 00:17:19.367865 kubelet[2111]: E0711 00:17:19.367825 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:19.368881 kubelet[2111]: I0711 00:17:19.368853 2111 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:17:19.369500 kubelet[2111]: I0711 00:17:19.369479 2111 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:17:19.369625 kubelet[2111]: W0711 00:17:19.369610 2111 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:17:19.372423 kubelet[2111]: I0711 00:17:19.372384 2111 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:17:19.372483 kubelet[2111]: I0711 00:17:19.372435 2111 server.go:1287] "Started kubelet" Jul 11 00:17:19.372743 kubelet[2111]: I0711 00:17:19.372706 2111 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:17:19.373849 kubelet[2111]: I0711 00:17:19.373829 2111 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:17:19.379780 kubelet[2111]: I0711 00:17:19.379753 2111 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:17:19.380486 kubelet[2111]: I0711 00:17:19.380421 2111 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:17:19.380978 kubelet[2111]: I0711 00:17:19.380653 2111 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:17:19.382131 kubelet[2111]: E0711 00:17:19.381878 2111 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18510a4929b47789 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:17:19.372416905 +0000 UTC m=+1.308265460,LastTimestamp:2025-07-11 00:17:19.372416905 +0000 UTC m=+1.308265460,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:17:19.382764 kubelet[2111]: I0711 00:17:19.382743 2111 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:17:19.383114 kubelet[2111]: E0711 00:17:19.383048 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:19.383558 kubelet[2111]: I0711 00:17:19.383429 2111 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:17:19.384128 kubelet[2111]: I0711 00:17:19.384110 2111 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:17:19.384389 kubelet[2111]: E0711 00:17:19.384357 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms" Jul 11 00:17:19.384589 kubelet[2111]: W0711 00:17:19.384550 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jul 11 00:17:19.384693 kubelet[2111]: E0711 00:17:19.384667 2111 
reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:19.384805 kubelet[2111]: I0711 00:17:19.384770 2111 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:17:19.384839 kubelet[2111]: I0711 00:17:19.384799 2111 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:17:19.385304 kubelet[2111]: I0711 00:17:19.384872 2111 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:17:19.385304 kubelet[2111]: W0711 00:17:19.385112 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jul 11 00:17:19.385304 kubelet[2111]: E0711 00:17:19.385149 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:19.386312 kubelet[2111]: I0711 00:17:19.386291 2111 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:17:19.386648 kubelet[2111]: E0711 00:17:19.386599 2111 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:17:19.398866 kubelet[2111]: I0711 00:17:19.398828 2111 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:17:19.398866 kubelet[2111]: I0711 00:17:19.398849 2111 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:17:19.398866 kubelet[2111]: I0711 00:17:19.398867 2111 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:17:19.402651 kubelet[2111]: I0711 00:17:19.402598 2111 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:17:19.403879 kubelet[2111]: I0711 00:17:19.403800 2111 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:17:19.403879 kubelet[2111]: I0711 00:17:19.403828 2111 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:17:19.403879 kubelet[2111]: I0711 00:17:19.403847 2111 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
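Every "connect: connection refused" against https://10.0.0.77:6443 at this stage is the normal bootstrap chicken-and-egg: the kubelet itself must launch the static control-plane pods from /etc/kubernetes/manifests before anything listens on 6443, and the client-go reflectors simply retry with backoff until it does. A minimal stdlib probe of the same endpoint (a sketch; the address is taken from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Same TCP endpoint the reflectors above are retrying.
    	conn, err := net.DialTimeout("tcp", "10.0.0.77:6443", 2*time.Second)
    	if err != nil {
    		fmt.Println("apiserver not up yet:", err) // matches the log until static pods start
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }
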
Jul 11 00:17:19.403879 kubelet[2111]: I0711 00:17:19.403853 2111 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:17:19.404015 kubelet[2111]: E0711 00:17:19.403900 2111 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:17:19.404608 kubelet[2111]: W0711 00:17:19.404552 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jul 11 00:17:19.404659 kubelet[2111]: E0711 00:17:19.404615 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:19.483345 kubelet[2111]: E0711 00:17:19.483288 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:19.504661 kubelet[2111]: E0711 00:17:19.504619 2111 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 00:17:19.567013 kubelet[2111]: I0711 00:17:19.566919 2111 policy_none.go:49] "None policy: Start" Jul 11 00:17:19.567013 kubelet[2111]: I0711 00:17:19.566946 2111 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:17:19.567013 kubelet[2111]: I0711 00:17:19.566960 2111 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:17:19.579482 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 00:17:19.583504 kubelet[2111]: E0711 00:17:19.583461 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:17:19.584900 kubelet[2111]: E0711 00:17:19.584864 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms" Jul 11 00:17:19.593586 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:17:19.596166 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:17:19.608411 kubelet[2111]: I0711 00:17:19.608371 2111 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:17:19.608600 kubelet[2111]: I0711 00:17:19.608576 2111 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:17:19.608634 kubelet[2111]: I0711 00:17:19.608593 2111 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:17:19.608859 kubelet[2111]: I0711 00:17:19.608829 2111 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:17:19.609546 kubelet[2111]: E0711 00:17:19.609516 2111 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 00:17:19.609587 kubelet[2111]: E0711 00:17:19.609566 2111 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:17:19.709624 kubelet[2111]: I0711 00:17:19.709548 2111 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:17:19.709954 kubelet[2111]: E0711 00:17:19.709927 2111 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Jul 11 00:17:19.713203 systemd[1]: Created slice kubepods-burstable-pod14002455a1090d0c69fe259822a841ef.slice - libcontainer container kubepods-burstable-pod14002455a1090d0c69fe259822a841ef.slice. Jul 11 00:17:19.733732 kubelet[2111]: E0711 00:17:19.733696 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:17:19.736154 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 00:17:19.737634 kubelet[2111]: E0711 00:17:19.737605 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:17:19.739680 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 11 00:17:19.741104 kubelet[2111]: E0711 00:17:19.741084 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 00:17:19.785819 kubelet[2111]: I0711 00:17:19.785774 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:19.785819 kubelet[2111]: I0711 00:17:19.785816 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14002455a1090d0c69fe259822a841ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"14002455a1090d0c69fe259822a841ef\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:19.785968 kubelet[2111]: I0711 00:17:19.785850 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:19.785968 kubelet[2111]: I0711 00:17:19.785868 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:19.785968 kubelet[2111]: 
I0711 00:17:19.785882 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:19.785968 kubelet[2111]: I0711 00:17:19.785896 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:17:19.785968 kubelet[2111]: I0711 00:17:19.785910 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14002455a1090d0c69fe259822a841ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"14002455a1090d0c69fe259822a841ef\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:19.786077 kubelet[2111]: I0711 00:17:19.785932 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14002455a1090d0c69fe259822a841ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"14002455a1090d0c69fe259822a841ef\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:17:19.786077 kubelet[2111]: I0711 00:17:19.785950 2111 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:17:19.912230 kubelet[2111]: I0711 00:17:19.912097 2111 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:17:19.912660 kubelet[2111]: E0711 00:17:19.912612 2111 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Jul 11 00:17:19.986299 kubelet[2111]: E0711 00:17:19.986255 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms" Jul 11 00:17:20.034623 kubelet[2111]: E0711 00:17:20.034589 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:20.035253 containerd[1439]: time="2025-07-11T00:17:20.035213069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:14002455a1090d0c69fe259822a841ef,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:20.038435 kubelet[2111]: E0711 00:17:20.038400 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:20.038906 containerd[1439]: time="2025-07-11T00:17:20.038843347Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:20.042310 kubelet[2111]: E0711 00:17:20.042281 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:20.042712 containerd[1439]: time="2025-07-11T00:17:20.042618916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:20.314062 kubelet[2111]: I0711 00:17:20.313964 2111 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 00:17:20.314371 kubelet[2111]: E0711 00:17:20.314270 2111 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Jul 11 00:17:20.452608 kubelet[2111]: W0711 00:17:20.452527 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jul 11 00:17:20.452608 kubelet[2111]: E0711 00:17:20.452609 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:17:20.661672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2750048671.mount: Deactivated successfully. 
Jul 11 00:17:20.668032 containerd[1439]: time="2025-07-11T00:17:20.667987142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:20.669212 containerd[1439]: time="2025-07-11T00:17:20.669171211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 11 00:17:20.669937 containerd[1439]: time="2025-07-11T00:17:20.669858672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:20.671022 containerd[1439]: time="2025-07-11T00:17:20.670988291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:17:20.671074 containerd[1439]: time="2025-07-11T00:17:20.671024724Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:20.672394 containerd[1439]: time="2025-07-11T00:17:20.672359770Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:20.673134 containerd[1439]: time="2025-07-11T00:17:20.673090990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 11 00:17:20.676417 containerd[1439]: time="2025-07-11T00:17:20.676370631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 11 00:17:20.678340 containerd[1439]: time="2025-07-11T00:17:20.678263860Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 642.968877ms"
Jul 11 00:17:20.679053 containerd[1439]: time="2025-07-11T00:17:20.678961730Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 640.04856ms"
Jul 11 00:17:20.680842 containerd[1439]: time="2025-07-11T00:17:20.680802793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 638.127506ms"
Jul 11 00:17:20.771917 kubelet[2111]: W0711 00:17:20.771875 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jul 11 00:17:20.772035 kubelet[2111]: E0711 00:17:20.771925 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:20.787885 kubelet[2111]: E0711 00:17:20.787801 2111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s"
Jul 11 00:17:20.824139 containerd[1439]: time="2025-07-11T00:17:20.822928394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:20.824139 containerd[1439]: time="2025-07-11T00:17:20.822980120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:20.824139 containerd[1439]: time="2025-07-11T00:17:20.822991010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:20.824139 containerd[1439]: time="2025-07-11T00:17:20.823074646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:20.825122 containerd[1439]: time="2025-07-11T00:17:20.824806850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:20.825122 containerd[1439]: time="2025-07-11T00:17:20.824870747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:20.825462 containerd[1439]: time="2025-07-11T00:17:20.825368757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:20.825462 containerd[1439]: time="2025-07-11T00:17:20.825438900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:20.825918 containerd[1439]: time="2025-07-11T00:17:20.825791378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:20.826001 containerd[1439]: time="2025-07-11T00:17:20.825738771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:20.826001 containerd[1439]: time="2025-07-11T00:17:20.825844066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:20.826069 containerd[1439]: time="2025-07-11T00:17:20.826007253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:20.849606 systemd[1]: Started cri-containerd-a339ff00195c770ebe9b6e577c7322e7881f0945e1027fd26f583db5fef18412.scope - libcontainer container a339ff00195c770ebe9b6e577c7322e7881f0945e1027fd26f583db5fef18412.
Jul 11 00:17:20.850984 systemd[1]: Started cri-containerd-aeb33a4154ba7a5cbee01ef84603b9cf1fbfe01dcc0e51a45e6b65db3fbbe586.scope - libcontainer container aeb33a4154ba7a5cbee01ef84603b9cf1fbfe01dcc0e51a45e6b65db3fbbe586.
Jul 11 00:17:20.854375 systemd[1]: Started cri-containerd-6f10282e3ada4b58aaaefe2886c6be66f7114c24225511b39ba14f57e0cc576b.scope - libcontainer container 6f10282e3ada4b58aaaefe2886c6be66f7114c24225511b39ba14f57e0cc576b.
Jul 11 00:17:20.892711 containerd[1439]: time="2025-07-11T00:17:20.892672684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f10282e3ada4b58aaaefe2886c6be66f7114c24225511b39ba14f57e0cc576b\""
Jul 11 00:17:20.893594 containerd[1439]: time="2025-07-11T00:17:20.893546953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:14002455a1090d0c69fe259822a841ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"a339ff00195c770ebe9b6e577c7322e7881f0945e1027fd26f583db5fef18412\""
Jul 11 00:17:20.894258 kubelet[2111]: E0711 00:17:20.894088 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:20.894258 kubelet[2111]: W0711 00:17:20.894188 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jul 11 00:17:20.894258 kubelet[2111]: E0711 00:17:20.894220 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:20.894396 kubelet[2111]: E0711 00:17:20.894260 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:20.895606 containerd[1439]: time="2025-07-11T00:17:20.895543716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb33a4154ba7a5cbee01ef84603b9cf1fbfe01dcc0e51a45e6b65db3fbbe586\""
Jul 11 00:17:20.896334 containerd[1439]: time="2025-07-11T00:17:20.896302361Z" level=info msg="CreateContainer within sandbox \"6f10282e3ada4b58aaaefe2886c6be66f7114c24225511b39ba14f57e0cc576b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 11 00:17:20.897699 containerd[1439]: time="2025-07-11T00:17:20.897661228Z" level=info msg="CreateContainer within sandbox \"a339ff00195c770ebe9b6e577c7322e7881f0945e1027fd26f583db5fef18412\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 11 00:17:20.899432 kubelet[2111]: E0711 00:17:20.897904 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:20.901545 containerd[1439]: time="2025-07-11T00:17:20.901513145Z" level=info msg="CreateContainer within sandbox \"aeb33a4154ba7a5cbee01ef84603b9cf1fbfe01dcc0e51a45e6b65db3fbbe586\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 11 00:17:20.906260 kubelet[2111]: W0711 00:17:20.906149 2111 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Jul 11 00:17:20.906260 kubelet[2111]: E0711 00:17:20.906229 2111 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:17:20.916096 containerd[1439]: time="2025-07-11T00:17:20.915890086Z" level=info msg="CreateContainer within sandbox \"6f10282e3ada4b58aaaefe2886c6be66f7114c24225511b39ba14f57e0cc576b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e217d1b5339acbff20e6d4bd2e44f8714958fb155048d926d751c388c9aed754\""
Jul 11 00:17:20.917788 containerd[1439]: time="2025-07-11T00:17:20.917751687Z" level=info msg="StartContainer for \"e217d1b5339acbff20e6d4bd2e44f8714958fb155048d926d751c388c9aed754\""
Jul 11 00:17:20.922735 containerd[1439]: time="2025-07-11T00:17:20.922700635Z" level=info msg="CreateContainer within sandbox \"a339ff00195c770ebe9b6e577c7322e7881f0945e1027fd26f583db5fef18412\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"12f1625164b85dcedb2ab2f52db0b2aefa62a547e1c8050270f84a39ffe8db37\""
Jul 11 00:17:20.923268 containerd[1439]: time="2025-07-11T00:17:20.923243885Z" level=info msg="StartContainer for \"12f1625164b85dcedb2ab2f52db0b2aefa62a547e1c8050270f84a39ffe8db37\""
Jul 11 00:17:20.927654 containerd[1439]: time="2025-07-11T00:17:20.927603341Z" level=info msg="CreateContainer within sandbox \"aeb33a4154ba7a5cbee01ef84603b9cf1fbfe01dcc0e51a45e6b65db3fbbe586\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7647c5a730ea7293c3e711ba80f9be1b1849fd41929a528f3c7aed5de34f0d8\""
Jul 11 00:17:20.928120 containerd[1439]: time="2025-07-11T00:17:20.928090421Z" level=info msg="StartContainer for \"c7647c5a730ea7293c3e711ba80f9be1b1849fd41929a528f3c7aed5de34f0d8\""
Jul 11 00:17:20.943604 systemd[1]: Started cri-containerd-e217d1b5339acbff20e6d4bd2e44f8714958fb155048d926d751c388c9aed754.scope - libcontainer container e217d1b5339acbff20e6d4bd2e44f8714958fb155048d926d751c388c9aed754.
Jul 11 00:17:20.946772 systemd[1]: Started cri-containerd-12f1625164b85dcedb2ab2f52db0b2aefa62a547e1c8050270f84a39ffe8db37.scope - libcontainer container 12f1625164b85dcedb2ab2f52db0b2aefa62a547e1c8050270f84a39ffe8db37.
Jul 11 00:17:20.965642 systemd[1]: Started cri-containerd-c7647c5a730ea7293c3e711ba80f9be1b1849fd41929a528f3c7aed5de34f0d8.scope - libcontainer container c7647c5a730ea7293c3e711ba80f9be1b1849fd41929a528f3c7aed5de34f0d8.
Jul 11 00:17:21.067601 containerd[1439]: time="2025-07-11T00:17:21.067524633Z" level=info msg="StartContainer for \"12f1625164b85dcedb2ab2f52db0b2aefa62a547e1c8050270f84a39ffe8db37\" returns successfully"
Jul 11 00:17:21.068479 containerd[1439]: time="2025-07-11T00:17:21.067584400Z" level=info msg="StartContainer for \"c7647c5a730ea7293c3e711ba80f9be1b1849fd41929a528f3c7aed5de34f0d8\" returns successfully"
Jul 11 00:17:21.068827 containerd[1439]: time="2025-07-11T00:17:21.067590085Z" level=info msg="StartContainer for \"e217d1b5339acbff20e6d4bd2e44f8714958fb155048d926d751c388c9aed754\" returns successfully"
Jul 11 00:17:21.115935 kubelet[2111]: I0711 00:17:21.115724 2111 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:17:21.116334 kubelet[2111]: E0711 00:17:21.116305 2111 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Jul 11 00:17:21.412390 kubelet[2111]: E0711 00:17:21.412039 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:17:21.412390 kubelet[2111]: E0711 00:17:21.412200 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:21.417958 kubelet[2111]: E0711 00:17:21.417578 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:17:21.417958 kubelet[2111]: E0711 00:17:21.417837 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:21.418676 kubelet[2111]: E0711 00:17:21.418473 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:17:21.418676 kubelet[2111]: E0711 00:17:21.418602 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:22.421825 kubelet[2111]: E0711 00:17:22.421798 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:17:22.422455 kubelet[2111]: E0711 00:17:22.421962 2111 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 11 00:17:22.422455 kubelet[2111]: E0711 00:17:22.422372 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:22.422455 kubelet[2111]: E0711 00:17:22.422378 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:22.717772 kubelet[2111]: I0711 00:17:22.717426 2111 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:17:22.783965 kubelet[2111]: E0711 00:17:22.783919 2111 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 11 00:17:22.855059 kubelet[2111]: I0711 00:17:22.854931 2111 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 00:17:22.855059 kubelet[2111]: E0711 00:17:22.854961 2111 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 11 00:17:22.867371 kubelet[2111]: E0711 00:17:22.867155 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:22.968028 kubelet[2111]: E0711 00:17:22.967914 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:23.068516 kubelet[2111]: E0711 00:17:23.068465 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:23.168616 kubelet[2111]: E0711 00:17:23.168572 2111 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:23.284454 kubelet[2111]: I0711 00:17:23.284316 2111 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:23.292690 kubelet[2111]: E0711 00:17:23.292659 2111 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:23.292690 kubelet[2111]: I0711 00:17:23.292688 2111 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:23.294538 kubelet[2111]: E0711 00:17:23.294508 2111 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:23.294538 kubelet[2111]: I0711 00:17:23.294534 2111 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:23.296104 kubelet[2111]: E0711 00:17:23.296051 2111 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:23.370771 kubelet[2111]: I0711 00:17:23.370719 2111 apiserver.go:52] "Watching apiserver"
Jul 11 00:17:23.383509 kubelet[2111]: I0711 00:17:23.383468 2111 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 00:17:23.986737 kubelet[2111]: I0711 00:17:23.986702 2111 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:24.004638 kubelet[2111]: E0711 00:17:24.004593 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:24.425474 kubelet[2111]: E0711 00:17:24.425165 2111 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:25.043866 systemd[1]: Reloading requested from client PID 2389 ('systemctl') (unit session-7.scope)...
Jul 11 00:17:25.043881 systemd[1]: Reloading...
Jul 11 00:17:25.113451 zram_generator::config[2431]: No configuration found.
Jul 11 00:17:25.201623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 00:17:25.269450 systemd[1]: Reloading finished in 225 ms.
Jul 11 00:17:25.302214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:25.311836 systemd[1]: kubelet.service: Deactivated successfully.
Jul 11 00:17:25.312064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:25.312112 systemd[1]: kubelet.service: Consumed 1.682s CPU time, 132.4M memory peak, 0B memory swap peak.
Jul 11 00:17:25.327729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 11 00:17:25.432947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 11 00:17:25.437300 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 11 00:17:25.471819 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:17:25.471819 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 11 00:17:25.471819 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 00:17:25.472175 kubelet[2470]: I0711 00:17:25.471877 2470 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 00:17:25.480083 kubelet[2470]: I0711 00:17:25.480029 2470 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 11 00:17:25.480083 kubelet[2470]: I0711 00:17:25.480067 2470 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 00:17:25.480375 kubelet[2470]: I0711 00:17:25.480345 2470 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 11 00:17:25.481633 kubelet[2470]: I0711 00:17:25.481609 2470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 11 00:17:25.484190 kubelet[2470]: I0711 00:17:25.484159 2470 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 00:17:25.487014 kubelet[2470]: E0711 00:17:25.486987 2470 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 11 00:17:25.487014 kubelet[2470]: I0711 00:17:25.487013 2470 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 11 00:17:25.489486 kubelet[2470]: I0711 00:17:25.489460 2470 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:17:25.489946 kubelet[2470]: I0711 00:17:25.489903 2470 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 00:17:25.490389 kubelet[2470]: I0711 00:17:25.489959 2470 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 00:17:25.490389 kubelet[2470]: I0711 00:17:25.490273 2470 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 00:17:25.490389 kubelet[2470]: I0711 00:17:25.490285 2470 container_manager_linux.go:304] "Creating device plugin manager"
Jul 11 00:17:25.490389 kubelet[2470]: I0711 00:17:25.490332 2470 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:17:25.490615 kubelet[2470]: I0711 00:17:25.490594 2470 kubelet.go:446] "Attempting to sync node with API server"
Jul 11 00:17:25.490615 kubelet[2470]: I0711 00:17:25.490615 2470 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 00:17:25.490670 kubelet[2470]: I0711 00:17:25.490635 2470 kubelet.go:352] "Adding apiserver pod source"
Jul 11 00:17:25.490670 kubelet[2470]: I0711 00:17:25.490645 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 00:17:25.491871 kubelet[2470]: I0711 00:17:25.491851 2470 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 11 00:17:25.492533 kubelet[2470]: I0711 00:17:25.492515 2470 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 00:17:25.494424 kubelet[2470]: I0711 00:17:25.492984 2470 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 00:17:25.494424 kubelet[2470]: I0711 00:17:25.493078 2470 server.go:1287] "Started kubelet"
Jul 11 00:17:25.494424 kubelet[2470]: I0711 00:17:25.493078 2470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 00:17:25.494424 kubelet[2470]: I0711 00:17:25.493264 2470 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 00:17:25.494424 kubelet[2470]: I0711 00:17:25.493670 2470 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 00:17:25.494840 kubelet[2470]: I0711 00:17:25.494820 2470 server.go:479] "Adding debug handlers to kubelet server"
Jul 11 00:17:25.495040 kubelet[2470]: I0711 00:17:25.495023 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 00:17:25.495349 kubelet[2470]: I0711 00:17:25.495327 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 00:17:25.495731 kubelet[2470]: I0711 00:17:25.495709 2470 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 00:17:25.495846 kubelet[2470]: E0711 00:17:25.495816 2470 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 00:17:25.496044 kubelet[2470]: I0711 00:17:25.496029 2470 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 00:17:25.496163 kubelet[2470]: I0711 00:17:25.496132 2470 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 00:17:25.508459 kubelet[2470]: I0711 00:17:25.500298 2470 factory.go:221] Registration of the systemd container factory successfully
Jul 11 00:17:25.508459 kubelet[2470]: I0711 00:17:25.500401 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 00:17:25.517620 kubelet[2470]: E0711 00:17:25.516774 2470 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 00:17:25.518601 kubelet[2470]: I0711 00:17:25.518567 2470 factory.go:221] Registration of the containerd container factory successfully
Jul 11 00:17:25.526554 kubelet[2470]: I0711 00:17:25.526518 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 00:17:25.527627 kubelet[2470]: I0711 00:17:25.527600 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:17:25.527627 kubelet[2470]: I0711 00:17:25.527629 2470 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 11 00:17:25.527716 kubelet[2470]: I0711 00:17:25.527685 2470 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 00:17:25.527716 kubelet[2470]: I0711 00:17:25.527695 2470 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 11 00:17:25.527760 kubelet[2470]: E0711 00:17:25.527738 2470 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 00:17:25.551065 kubelet[2470]: I0711 00:17:25.551020 2470 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 00:17:25.551065 kubelet[2470]: I0711 00:17:25.551043 2470 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 00:17:25.551065 kubelet[2470]: I0711 00:17:25.551064 2470 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 00:17:25.551240 kubelet[2470]: I0711 00:17:25.551216 2470 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 11 00:17:25.551240 kubelet[2470]: I0711 00:17:25.551230 2470 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 11 00:17:25.551282 kubelet[2470]: I0711 00:17:25.551248 2470 policy_none.go:49] "None policy: Start"
Jul 11 00:17:25.551282 kubelet[2470]: I0711 00:17:25.551256 2470 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 00:17:25.551282 kubelet[2470]: I0711 00:17:25.551265 2470 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 00:17:25.551392 kubelet[2470]: I0711 00:17:25.551374 2470 state_mem.go:75] "Updated machine memory state"
Jul 11 00:17:25.554875 kubelet[2470]: I0711 00:17:25.554782 2470 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 00:17:25.555286 kubelet[2470]: I0711 00:17:25.554959 2470 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 00:17:25.555286 kubelet[2470]: I0711 00:17:25.554976 2470 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 00:17:25.555286 kubelet[2470]: I0711 00:17:25.555180 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 00:17:25.557380 kubelet[2470]: E0711 00:17:25.557349 2470 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 00:17:25.628914 kubelet[2470]: I0711 00:17:25.628716 2470 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:25.628914 kubelet[2470]: I0711 00:17:25.628737 2470 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:25.630542 kubelet[2470]: I0711 00:17:25.630439 2470 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:25.636717 kubelet[2470]: E0711 00:17:25.636650 2470 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:25.660192 kubelet[2470]: I0711 00:17:25.660165 2470 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 00:17:25.666037 kubelet[2470]: I0711 00:17:25.666004 2470 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 11 00:17:25.666147 kubelet[2470]: I0711 00:17:25.666111 2470 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 00:17:25.697550 kubelet[2470]: I0711 00:17:25.697505 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:25.697550 kubelet[2470]: I0711 00:17:25.697543 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:25.697713 kubelet[2470]: I0711 00:17:25.697565 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:25.697713 kubelet[2470]: I0711 00:17:25.697581 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:25.697713 kubelet[2470]: I0711 00:17:25.697600 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14002455a1090d0c69fe259822a841ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"14002455a1090d0c69fe259822a841ef\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:25.697713 kubelet[2470]: I0711 00:17:25.697614 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14002455a1090d0c69fe259822a841ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"14002455a1090d0c69fe259822a841ef\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:25.697713 kubelet[2470]: I0711 00:17:25.697629 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:17:25.697812 kubelet[2470]: I0711 00:17:25.697646 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14002455a1090d0c69fe259822a841ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"14002455a1090d0c69fe259822a841ef\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 00:17:25.697812 kubelet[2470]: I0711 00:17:25.697663 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 00:17:25.934023 kubelet[2470]: E0711 00:17:25.933895 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:25.936495 kubelet[2470]: E0711 00:17:25.936470 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:25.937318 kubelet[2470]: E0711 00:17:25.937250 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:26.492680 kubelet[2470]: I0711 00:17:26.492290 2470 apiserver.go:52] "Watching apiserver"
Jul 11 00:17:26.497049 kubelet[2470]: I0711 00:17:26.496997 2470 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 00:17:26.540711 kubelet[2470]: E0711 00:17:26.540531 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:26.540711 kubelet[2470]: E0711 00:17:26.540647 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:26.541658 kubelet[2470]: E0711 00:17:26.541028 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:26.591909 kubelet[2470]: I0711 00:17:26.590813 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.590795037 podStartE2EDuration="1.590795037s" podCreationTimestamp="2025-07-11 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:26.559599823 +0000 UTC m=+1.119319355" watchObservedRunningTime="2025-07-11 00:17:26.590795037 +0000 UTC m=+1.150514529"
Jul 11 00:17:26.622081 kubelet[2470]: I0711 00:17:26.622010 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.621991172 podStartE2EDuration="3.621991172s" podCreationTimestamp="2025-07-11 00:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:26.590938333 +0000 UTC m=+1.150657825" watchObservedRunningTime="2025-07-11 00:17:26.621991172 +0000 UTC m=+1.181710704"
Jul 11 00:17:26.631070 kubelet[2470]: I0711 00:17:26.630965 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6309474000000002 podStartE2EDuration="1.6309474s" podCreationTimestamp="2025-07-11 00:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:26.622294135 +0000 UTC m=+1.182013747" watchObservedRunningTime="2025-07-11 00:17:26.6309474 +0000 UTC m=+1.190666892"
Jul 11 00:17:27.542057 kubelet[2470]: E0711 00:17:27.541430 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:27.542534 kubelet[2470]: E0711 00:17:27.542470 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:28.542867 kubelet[2470]: E0711 00:17:28.542832 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:32.007469 kubelet[2470]: I0711 00:17:32.007436 2470 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 11 00:17:32.008168 containerd[1439]: time="2025-07-11T00:17:32.008034758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 11 00:17:32.008460 kubelet[2470]: I0711 00:17:32.008317 2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 11 00:17:32.650055 systemd[1]: Created slice kubepods-besteffort-podc96deaa0_8d87_4902_8903_9690eb8e86a0.slice - libcontainer container kubepods-besteffort-podc96deaa0_8d87_4902_8903_9690eb8e86a0.slice.
Jul 11 00:17:32.743660 kubelet[2470]: I0711 00:17:32.743620 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c96deaa0-8d87-4902-8903-9690eb8e86a0-xtables-lock\") pod \"kube-proxy-7n4hd\" (UID: \"c96deaa0-8d87-4902-8903-9690eb8e86a0\") " pod="kube-system/kube-proxy-7n4hd"
Jul 11 00:17:32.743660 kubelet[2470]: I0711 00:17:32.743673 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c96deaa0-8d87-4902-8903-9690eb8e86a0-lib-modules\") pod \"kube-proxy-7n4hd\" (UID: \"c96deaa0-8d87-4902-8903-9690eb8e86a0\") " pod="kube-system/kube-proxy-7n4hd"
Jul 11 00:17:32.743837 kubelet[2470]: I0711 00:17:32.743698 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c96deaa0-8d87-4902-8903-9690eb8e86a0-kube-proxy\") pod \"kube-proxy-7n4hd\" (UID: \"c96deaa0-8d87-4902-8903-9690eb8e86a0\") " pod="kube-system/kube-proxy-7n4hd"
Jul 11 00:17:32.743837 kubelet[2470]: I0711 00:17:32.743716 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wcgs\" (UniqueName: \"kubernetes.io/projected/c96deaa0-8d87-4902-8903-9690eb8e86a0-kube-api-access-9wcgs\") pod \"kube-proxy-7n4hd\" (UID: \"c96deaa0-8d87-4902-8903-9690eb8e86a0\") " pod="kube-system/kube-proxy-7n4hd"
Jul 11 00:17:32.959004 kubelet[2470]: E0711 00:17:32.958886 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:32.959901 containerd[1439]: time="2025-07-11T00:17:32.959782741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7n4hd,Uid:c96deaa0-8d87-4902-8903-9690eb8e86a0,Namespace:kube-system,Attempt:0,}"
Jul 11 00:17:32.982355 containerd[1439]: time="2025-07-11T00:17:32.982089103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:32.982355 containerd[1439]: time="2025-07-11T00:17:32.982160658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:32.982355 containerd[1439]: time="2025-07-11T00:17:32.982172783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:32.982355 containerd[1439]: time="2025-07-11T00:17:32.982271831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:33.005658 systemd[1]: Started cri-containerd-43e306b043f3a6e022d2ca54391de986994514828d3c18ddcaa42c7d04db1047.scope - libcontainer container 43e306b043f3a6e022d2ca54391de986994514828d3c18ddcaa42c7d04db1047.
Jul 11 00:17:33.030162 containerd[1439]: time="2025-07-11T00:17:33.028451854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7n4hd,Uid:c96deaa0-8d87-4902-8903-9690eb8e86a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"43e306b043f3a6e022d2ca54391de986994514828d3c18ddcaa42c7d04db1047\""
Jul 11 00:17:33.031843 kubelet[2470]: E0711 00:17:33.031601 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:33.036482 containerd[1439]: time="2025-07-11T00:17:33.035971602Z" level=info msg="CreateContainer within sandbox \"43e306b043f3a6e022d2ca54391de986994514828d3c18ddcaa42c7d04db1047\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 11 00:17:33.045350 systemd[1]: Created slice kubepods-besteffort-pod3e3eb77d_fef6_4c89_bf07_8405b9a4f95f.slice - libcontainer container kubepods-besteffort-pod3e3eb77d_fef6_4c89_bf07_8405b9a4f95f.slice.
Jul 11 00:17:33.046136 kubelet[2470]: I0711 00:17:33.045828 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-527rz\" (UniqueName: \"kubernetes.io/projected/3e3eb77d-fef6-4c89-bf07-8405b9a4f95f-kube-api-access-527rz\") pod \"tigera-operator-747864d56d-sxps4\" (UID: \"3e3eb77d-fef6-4c89-bf07-8405b9a4f95f\") " pod="tigera-operator/tigera-operator-747864d56d-sxps4"
Jul 11 00:17:33.046136 kubelet[2470]: I0711 00:17:33.045880 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e3eb77d-fef6-4c89-bf07-8405b9a4f95f-var-lib-calico\") pod \"tigera-operator-747864d56d-sxps4\" (UID: \"3e3eb77d-fef6-4c89-bf07-8405b9a4f95f\") " pod="tigera-operator/tigera-operator-747864d56d-sxps4"
Jul 11 00:17:33.060155 containerd[1439]: time="2025-07-11T00:17:33.060102362Z" level=info msg="CreateContainer within sandbox \"43e306b043f3a6e022d2ca54391de986994514828d3c18ddcaa42c7d04db1047\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f9acc6509b8a6ed015d37912f434031940b96fe8c58e06ad0cae9c5d6b0993a\""
Jul 11 00:17:33.060823 containerd[1439]: time="2025-07-11T00:17:33.060768266Z" level=info msg="StartContainer for \"9f9acc6509b8a6ed015d37912f434031940b96fe8c58e06ad0cae9c5d6b0993a\""
Jul 11 00:17:33.088589 systemd[1]: Started cri-containerd-9f9acc6509b8a6ed015d37912f434031940b96fe8c58e06ad0cae9c5d6b0993a.scope - libcontainer container 9f9acc6509b8a6ed015d37912f434031940b96fe8c58e06ad0cae9c5d6b0993a.
Jul 11 00:17:33.118053 containerd[1439]: time="2025-07-11T00:17:33.117996994Z" level=info msg="StartContainer for \"9f9acc6509b8a6ed015d37912f434031940b96fe8c58e06ad0cae9c5d6b0993a\" returns successfully"
Jul 11 00:17:33.350222 containerd[1439]: time="2025-07-11T00:17:33.350110088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-sxps4,Uid:3e3eb77d-fef6-4c89-bf07-8405b9a4f95f,Namespace:tigera-operator,Attempt:0,}"
Jul 11 00:17:33.370477 containerd[1439]: time="2025-07-11T00:17:33.370381849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:17:33.370477 containerd[1439]: time="2025-07-11T00:17:33.370449119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:17:33.370477 containerd[1439]: time="2025-07-11T00:17:33.370461045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:33.370664 containerd[1439]: time="2025-07-11T00:17:33.370545443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:17:33.387642 systemd[1]: Started cri-containerd-5ba6e79a558a9cbd17bc0128df0c755c52ee3e6b96249e9f723bad17fd2f11f3.scope - libcontainer container 5ba6e79a558a9cbd17bc0128df0c755c52ee3e6b96249e9f723bad17fd2f11f3.
Jul 11 00:17:33.417024 containerd[1439]: time="2025-07-11T00:17:33.416961043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-sxps4,Uid:3e3eb77d-fef6-4c89-bf07-8405b9a4f95f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ba6e79a558a9cbd17bc0128df0c755c52ee3e6b96249e9f723bad17fd2f11f3\""
Jul 11 00:17:33.418619 containerd[1439]: time="2025-07-11T00:17:33.418586304Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 11 00:17:33.556622 kubelet[2470]: E0711 00:17:33.556575 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:33.567234 kubelet[2470]: I0711 00:17:33.567168 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7n4hd" podStartSLOduration=1.566992037 podStartE2EDuration="1.566992037s" podCreationTimestamp="2025-07-11 00:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:17:33.566946617 +0000 UTC m=+8.126666149" watchObservedRunningTime="2025-07-11 00:17:33.566992037 +0000 UTC m=+8.126711569"
Jul 11 00:17:33.840984 kubelet[2470]: E0711 00:17:33.840589 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:34.554342 kubelet[2470]: E0711 00:17:34.554309 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:34.816164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700320775.mount: Deactivated successfully.
Jul 11 00:17:34.889699 kubelet[2470]: E0711 00:17:34.889666 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:35.124133 containerd[1439]: time="2025-07-11T00:17:35.123805347Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:35.124810 containerd[1439]: time="2025-07-11T00:17:35.124783789Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 11 00:17:35.126058 containerd[1439]: time="2025-07-11T00:17:35.126013494Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:35.128593 containerd[1439]: time="2025-07-11T00:17:35.128553057Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 00:17:35.130208 containerd[1439]: time="2025-07-11T00:17:35.130168921Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.711545121s"
Jul 11 00:17:35.130239 containerd[1439]: time="2025-07-11T00:17:35.130219781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 11 00:17:35.132092 containerd[1439]: time="2025-07-11T00:17:35.132051814Z" level=info msg="CreateContainer within sandbox \"5ba6e79a558a9cbd17bc0128df0c755c52ee3e6b96249e9f723bad17fd2f11f3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 11 00:17:35.145339 containerd[1439]: time="2025-07-11T00:17:35.145280368Z" level=info msg="CreateContainer within sandbox \"5ba6e79a558a9cbd17bc0128df0c755c52ee3e6b96249e9f723bad17fd2f11f3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"851f597b062a48fa9087c3cc3fb1174ea349cc7efec7dce4f0564ee2ae1f4590\""
Jul 11 00:17:35.145901 containerd[1439]: time="2025-07-11T00:17:35.145865608Z" level=info msg="StartContainer for \"851f597b062a48fa9087c3cc3fb1174ea349cc7efec7dce4f0564ee2ae1f4590\""
Jul 11 00:17:35.173629 systemd[1]: Started cri-containerd-851f597b062a48fa9087c3cc3fb1174ea349cc7efec7dce4f0564ee2ae1f4590.scope - libcontainer container 851f597b062a48fa9087c3cc3fb1174ea349cc7efec7dce4f0564ee2ae1f4590.
Jul 11 00:17:35.199796 containerd[1439]: time="2025-07-11T00:17:35.199723771Z" level=info msg="StartContainer for \"851f597b062a48fa9087c3cc3fb1174ea349cc7efec7dce4f0564ee2ae1f4590\" returns successfully"
Jul 11 00:17:35.566367 kubelet[2470]: E0711 00:17:35.558397 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:35.566367 kubelet[2470]: E0711 00:17:35.558611 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:37.018428 kubelet[2470]: E0711 00:17:37.015453 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:17:37.029750 kubelet[2470]: I0711 00:17:37.029583 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-sxps4" podStartSLOduration=3.316750229 podStartE2EDuration="5.029564732s" podCreationTimestamp="2025-07-11 00:17:32 +0000 UTC" firstStartedPulling="2025-07-11 00:17:33.418141061 +0000 UTC m=+7.977860593" lastFinishedPulling="2025-07-11 00:17:35.130955604 +0000 UTC m=+9.690675096" observedRunningTime="2025-07-11 00:17:35.581148088 +0000 UTC m=+10.140867620" watchObservedRunningTime="2025-07-11 00:17:37.029564732 +0000 UTC m=+11.589284224"
Jul 11 00:17:37.079517 update_engine[1426]: I20250711 00:17:37.079447 1426 update_attempter.cc:509] Updating boot flags...
Jul 11 00:17:37.144486 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2829)
Jul 11 00:17:37.188547 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2829)
Jul 11 00:17:40.695260 sudo[1615]: pam_unix(sudo:session): session closed for user root
Jul 11 00:17:40.702690 sshd[1612]: pam_unix(sshd:session): session closed for user core
Jul 11 00:17:40.705313 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:57268.service: Deactivated successfully.
Jul 11 00:17:40.707243 systemd[1]: session-7.scope: Deactivated successfully.
Jul 11 00:17:40.707604 systemd[1]: session-7.scope: Consumed 7.125s CPU time, 154.7M memory peak, 0B memory swap peak.
Jul 11 00:17:40.710778 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit.
Jul 11 00:17:40.711918 systemd-logind[1419]: Removed session 7.
Jul 11 00:17:45.721005 systemd[1]: Created slice kubepods-besteffort-pod8940f1c2_daaa_42ef_a8c9_6330a4191b87.slice - libcontainer container kubepods-besteffort-pod8940f1c2_daaa_42ef_a8c9_6330a4191b87.slice.
Jul 11 00:17:45.735898 kubelet[2470]: I0711 00:17:45.735850 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8940f1c2-daaa-42ef-a8c9-6330a4191b87-typha-certs\") pod \"calico-typha-7dd447d85-4rcsj\" (UID: \"8940f1c2-daaa-42ef-a8c9-6330a4191b87\") " pod="calico-system/calico-typha-7dd447d85-4rcsj" Jul 11 00:17:45.735898 kubelet[2470]: I0711 00:17:45.735895 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhhr\" (UniqueName: \"kubernetes.io/projected/8940f1c2-daaa-42ef-a8c9-6330a4191b87-kube-api-access-hnhhr\") pod \"calico-typha-7dd447d85-4rcsj\" (UID: \"8940f1c2-daaa-42ef-a8c9-6330a4191b87\") " pod="calico-system/calico-typha-7dd447d85-4rcsj" Jul 11 00:17:45.736435 kubelet[2470]: I0711 00:17:45.735916 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8940f1c2-daaa-42ef-a8c9-6330a4191b87-tigera-ca-bundle\") pod \"calico-typha-7dd447d85-4rcsj\" (UID: \"8940f1c2-daaa-42ef-a8c9-6330a4191b87\") " pod="calico-system/calico-typha-7dd447d85-4rcsj" Jul 11 00:17:46.008685 systemd[1]: Created slice kubepods-besteffort-podda1b4116_13b9_446c_be91_deef34c93d4d.slice - libcontainer container kubepods-besteffort-podda1b4116_13b9_446c_be91_deef34c93d4d.slice. Jul 11 00:17:46.024873 kubelet[2470]: E0711 00:17:46.024828 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:46.026301 containerd[1439]: time="2025-07-11T00:17:46.025910759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dd447d85-4rcsj,Uid:8940f1c2-daaa-42ef-a8c9-6330a4191b87,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:46.038023 kubelet[2470]: I0711 00:17:46.037978 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/da1b4116-13b9-446c-be91-deef34c93d4d-node-certs\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038142 kubelet[2470]: I0711 00:17:46.038030 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-xtables-lock\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038142 kubelet[2470]: I0711 00:17:46.038054 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf7wx\" (UniqueName: \"kubernetes.io/projected/da1b4116-13b9-446c-be91-deef34c93d4d-kube-api-access-nf7wx\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038142 kubelet[2470]: I0711 00:17:46.038077 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-cni-net-dir\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038142 kubelet[2470]: I0711 00:17:46.038092 2470 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-cni-log-dir\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038142 kubelet[2470]: I0711 00:17:46.038106 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-cni-bin-dir\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038317 kubelet[2470]: I0711 00:17:46.038123 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-flexvol-driver-host\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038317 kubelet[2470]: I0711 00:17:46.038138 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-lib-modules\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038317 kubelet[2470]: I0711 00:17:46.038151 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-policysync\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038317 kubelet[2470]: I0711 00:17:46.038165 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da1b4116-13b9-446c-be91-deef34c93d4d-tigera-ca-bundle\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038317 kubelet[2470]: I0711 00:17:46.038181 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-var-lib-calico\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.038490 kubelet[2470]: I0711 00:17:46.038194 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/da1b4116-13b9-446c-be91-deef34c93d4d-var-run-calico\") pod \"calico-node-xfh4h\" (UID: \"da1b4116-13b9-446c-be91-deef34c93d4d\") " pod="calico-system/calico-node-xfh4h" Jul 11 00:17:46.046881 containerd[1439]: time="2025-07-11T00:17:46.045938372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:46.047281 containerd[1439]: time="2025-07-11T00:17:46.047175755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:46.047378 containerd[1439]: time="2025-07-11T00:17:46.047268097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:46.047596 containerd[1439]: time="2025-07-11T00:17:46.047560889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:46.069587 systemd[1]: Started cri-containerd-fcb1c4eeab9b61483e4405e59674e16ea1a41f7c332be2e217e9a3affc570bb6.scope - libcontainer container fcb1c4eeab9b61483e4405e59674e16ea1a41f7c332be2e217e9a3affc570bb6. Jul 11 00:17:46.104287 containerd[1439]: time="2025-07-11T00:17:46.104236696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dd447d85-4rcsj,Uid:8940f1c2-daaa-42ef-a8c9-6330a4191b87,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcb1c4eeab9b61483e4405e59674e16ea1a41f7c332be2e217e9a3affc570bb6\"" Jul 11 00:17:46.105858 kubelet[2470]: E0711 00:17:46.105835 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:46.108831 containerd[1439]: time="2025-07-11T00:17:46.108784127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:17:46.151389 kubelet[2470]: E0711 00:17:46.151312 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.151389 kubelet[2470]: W0711 00:17:46.151337 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.151389 kubelet[2470]: E0711 00:17:46.151366 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.234966 kubelet[2470]: E0711 00:17:46.234723 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8xxb" podUID="4bb0797a-e6b9-46c0-ab2f-d796e4b11505" Jul 11 00:17:46.313829 containerd[1439]: time="2025-07-11T00:17:46.313725441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xfh4h,Uid:da1b4116-13b9-446c-be91-deef34c93d4d,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:46.326440 kubelet[2470]: E0711 00:17:46.325710 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.326440 kubelet[2470]: W0711 00:17:46.325733 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.326440 kubelet[2470]: E0711 00:17:46.325753 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.327110 kubelet[2470]: E0711 00:17:46.326949 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.327110 kubelet[2470]: W0711 00:17:46.326966 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.327110 kubelet[2470]: E0711 00:17:46.327079 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.331195 kubelet[2470]: E0711 00:17:46.330887 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.331195 kubelet[2470]: W0711 00:17:46.330912 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.331195 kubelet[2470]: E0711 00:17:46.330928 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.331483 kubelet[2470]: E0711 00:17:46.331468 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.331558 kubelet[2470]: W0711 00:17:46.331545 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.331620 kubelet[2470]: E0711 00:17:46.331608 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.332140 kubelet[2470]: E0711 00:17:46.332018 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.332140 kubelet[2470]: W0711 00:17:46.332034 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.332140 kubelet[2470]: E0711 00:17:46.332046 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.333257 kubelet[2470]: E0711 00:17:46.333027 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.333394 kubelet[2470]: W0711 00:17:46.333360 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.334113 kubelet[2470]: E0711 00:17:46.334003 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.334619 kubelet[2470]: E0711 00:17:46.334500 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.335054 kubelet[2470]: W0711 00:17:46.334948 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.335054 kubelet[2470]: E0711 00:17:46.334973 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.335822 kubelet[2470]: E0711 00:17:46.335444 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.335822 kubelet[2470]: W0711 00:17:46.335458 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.335822 kubelet[2470]: E0711 00:17:46.335470 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.337434 kubelet[2470]: E0711 00:17:46.336996 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.337434 kubelet[2470]: W0711 00:17:46.337014 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.337434 kubelet[2470]: E0711 00:17:46.337026 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.339509 kubelet[2470]: E0711 00:17:46.339488 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.339814 kubelet[2470]: W0711 00:17:46.339666 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.339814 kubelet[2470]: E0711 00:17:46.339690 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.340797 kubelet[2470]: E0711 00:17:46.340335 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.340797 kubelet[2470]: W0711 00:17:46.340353 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.340797 kubelet[2470]: E0711 00:17:46.340366 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.341711 kubelet[2470]: E0711 00:17:46.341285 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.341711 kubelet[2470]: W0711 00:17:46.341306 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.341711 kubelet[2470]: E0711 00:17:46.341320 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.342583 kubelet[2470]: E0711 00:17:46.342208 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.343708 kubelet[2470]: W0711 00:17:46.343188 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.343708 kubelet[2470]: E0711 00:17:46.343219 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.345196 kubelet[2470]: E0711 00:17:46.344812 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.345196 kubelet[2470]: W0711 00:17:46.344828 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.345196 kubelet[2470]: E0711 00:17:46.344842 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.345687 kubelet[2470]: E0711 00:17:46.345669 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.346036 kubelet[2470]: W0711 00:17:46.345977 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.346308 kubelet[2470]: E0711 00:17:46.345998 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.346754 kubelet[2470]: E0711 00:17:46.346737 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.346905 kubelet[2470]: W0711 00:17:46.346829 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.346905 kubelet[2470]: E0711 00:17:46.346856 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.349761 kubelet[2470]: E0711 00:17:46.349494 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.349761 kubelet[2470]: W0711 00:17:46.349515 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.349761 kubelet[2470]: E0711 00:17:46.349529 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.350182 kubelet[2470]: E0711 00:17:46.350012 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.350182 kubelet[2470]: W0711 00:17:46.350029 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.350182 kubelet[2470]: E0711 00:17:46.350041 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.356537 kubelet[2470]: E0711 00:17:46.356507 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.356537 kubelet[2470]: W0711 00:17:46.356532 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.356719 kubelet[2470]: E0711 00:17:46.356552 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.357275 kubelet[2470]: E0711 00:17:46.357243 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.357275 kubelet[2470]: W0711 00:17:46.357262 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.357275 kubelet[2470]: E0711 00:17:46.357277 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.357827 kubelet[2470]: E0711 00:17:46.357626 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.357827 kubelet[2470]: W0711 00:17:46.357642 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.357827 kubelet[2470]: E0711 00:17:46.357656 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.357827 kubelet[2470]: I0711 00:17:46.357677 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzx6b\" (UniqueName: \"kubernetes.io/projected/4bb0797a-e6b9-46c0-ab2f-d796e4b11505-kube-api-access-qzx6b\") pod \"csi-node-driver-t8xxb\" (UID: \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\") " pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:46.357990 kubelet[2470]: E0711 00:17:46.357876 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.357990 kubelet[2470]: W0711 00:17:46.357887 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.357990 kubelet[2470]: E0711 00:17:46.357896 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.357990 kubelet[2470]: I0711 00:17:46.357937 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4bb0797a-e6b9-46c0-ab2f-d796e4b11505-varrun\") pod \"csi-node-driver-t8xxb\" (UID: \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\") " pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:46.358733 kubelet[2470]: E0711 00:17:46.358219 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.358733 kubelet[2470]: W0711 00:17:46.358303 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.358733 kubelet[2470]: E0711 00:17:46.358388 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.358733 kubelet[2470]: I0711 00:17:46.358418 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4bb0797a-e6b9-46c0-ab2f-d796e4b11505-registration-dir\") pod \"csi-node-driver-t8xxb\" (UID: \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\") " pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:46.359295 kubelet[2470]: E0711 00:17:46.359265 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.359911 kubelet[2470]: W0711 00:17:46.359309 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.359911 kubelet[2470]: E0711 00:17:46.359329 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.359911 kubelet[2470]: I0711 00:17:46.359347 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4bb0797a-e6b9-46c0-ab2f-d796e4b11505-socket-dir\") pod \"csi-node-driver-t8xxb\" (UID: \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\") " pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:46.359911 kubelet[2470]: E0711 00:17:46.359640 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.359911 kubelet[2470]: W0711 00:17:46.359655 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.359911 kubelet[2470]: E0711 00:17:46.359710 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.359911 kubelet[2470]: I0711 00:17:46.359749 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4bb0797a-e6b9-46c0-ab2f-d796e4b11505-kubelet-dir\") pod \"csi-node-driver-t8xxb\" (UID: \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\") " pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:46.359911 kubelet[2470]: E0711 00:17:46.359858 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.360109 kubelet[2470]: W0711 00:17:46.359868 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.360109 kubelet[2470]: E0711 00:17:46.360026 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360184 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.360949 kubelet[2470]: W0711 00:17:46.360199 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360293 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360398 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.360949 kubelet[2470]: W0711 00:17:46.360421 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360624 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.360949 kubelet[2470]: W0711 00:17:46.360633 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360644 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360772 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.360949 kubelet[2470]: E0711 00:17:46.360824 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.361323 kubelet[2470]: W0711 00:17:46.360847 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.361323 kubelet[2470]: E0711 00:17:46.360886 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.361323 kubelet[2470]: E0711 00:17:46.361226 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.361323 kubelet[2470]: W0711 00:17:46.361258 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.361323 kubelet[2470]: E0711 00:17:46.361270 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.361776 kubelet[2470]: E0711 00:17:46.361542 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.361776 kubelet[2470]: W0711 00:17:46.361555 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.361776 kubelet[2470]: E0711 00:17:46.361565 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:17:46.361776 kubelet[2470]: E0711 00:17:46.361754 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.361776 kubelet[2470]: W0711 00:17:46.361763 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.361776 kubelet[2470]: E0711 00:17:46.361771 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.361776 kubelet[2470]: E0711 00:17:46.361965 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.361776 kubelet[2470]: W0711 00:17:46.361974 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.361776 kubelet[2470]: E0711 00:17:46.361983 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.362444 kubelet[2470]: E0711 00:17:46.362134 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:46.362444 kubelet[2470]: W0711 00:17:46.362141 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:46.362444 kubelet[2470]: E0711 00:17:46.362149 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:46.370478 containerd[1439]: time="2025-07-11T00:17:46.369932374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:46.370478 containerd[1439]: time="2025-07-11T00:17:46.370418172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:46.370478 containerd[1439]: time="2025-07-11T00:17:46.370433256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:46.370478 containerd[1439]: time="2025-07-11T00:17:46.370523118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:46.392602 systemd[1]: Started cri-containerd-f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f.scope - libcontainer container f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f. 
Jul 11 00:17:46.420826 containerd[1439]: time="2025-07-11T00:17:46.420725744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xfh4h,Uid:da1b4116-13b9-446c-be91-deef34c93d4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\"" [... the same kubelet FlexVolume probe-failure triplet repeated from Jul 11 00:17:46.461073 to 00:17:46.477855; duplicates omitted ...] Jul 11 00:17:47.084945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650057373.mount: Deactivated successfully. Jul 11 00:17:47.931675 containerd[1439]: time="2025-07-11T00:17:47.931615019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:47.932320 containerd[1439]: time="2025-07-11T00:17:47.932285456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 11 00:17:47.933453 containerd[1439]: time="2025-07-11T00:17:47.933383793Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:47.937850 containerd[1439]: time="2025-07-11T00:17:47.937804869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:47.938510 containerd[1439]: time="2025-07-11T00:17:47.938462743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.829640926s" Jul 11 00:17:47.938564 containerd[1439]: time="2025-07-11T00:17:47.938509434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 11 00:17:47.940376 containerd[1439]: time="2025-07-11T00:17:47.939933528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:17:47.954079 containerd[1439]: time="2025-07-11T00:17:47.954029590Z" level=info msg="CreateContainer within sandbox \"fcb1c4eeab9b61483e4405e59674e16ea1a41f7c332be2e217e9a3affc570bb6\" for container
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:17:47.969228 containerd[1439]: time="2025-07-11T00:17:47.969172697Z" level=info msg="CreateContainer within sandbox \"fcb1c4eeab9b61483e4405e59674e16ea1a41f7c332be2e217e9a3affc570bb6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d9b87be7d92417da9f53d7c041c210d499ca3f8da7d126ad2378253cc0213473\"" Jul 11 00:17:47.969778 containerd[1439]: time="2025-07-11T00:17:47.969705062Z" level=info msg="StartContainer for \"d9b87be7d92417da9f53d7c041c210d499ca3f8da7d126ad2378253cc0213473\"" Jul 11 00:17:48.000630 systemd[1]: Started cri-containerd-d9b87be7d92417da9f53d7c041c210d499ca3f8da7d126ad2378253cc0213473.scope - libcontainer container d9b87be7d92417da9f53d7c041c210d499ca3f8da7d126ad2378253cc0213473. Jul 11 00:17:48.045892 containerd[1439]: time="2025-07-11T00:17:48.045849992Z" level=info msg="StartContainer for \"d9b87be7d92417da9f53d7c041c210d499ca3f8da7d126ad2378253cc0213473\" returns successfully" Jul 11 00:17:48.528312 kubelet[2470]: E0711 00:17:48.528239 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8xxb" podUID="4bb0797a-e6b9-46c0-ab2f-d796e4b11505" Jul 11 00:17:48.616851 kubelet[2470]: E0711 00:17:48.616822 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:48.632937 kubelet[2470]: I0711 00:17:48.632109 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7dd447d85-4rcsj" podStartSLOduration=1.800220415 podStartE2EDuration="3.632089917s" podCreationTimestamp="2025-07-11 00:17:45 +0000 UTC" firstStartedPulling="2025-07-11 00:17:46.107560708 +0000 UTC m=+20.667280240" lastFinishedPulling="2025-07-11 00:17:47.93943021 +0000 UTC m=+22.499149742" observedRunningTime="2025-07-11 00:17:48.631945285 +0000 UTC m=+23.191664817" watchObservedRunningTime="2025-07-11 00:17:48.632089917 +0000 UTC m=+23.191809449" Jul 11 00:17:48.677132 kubelet[2470]: E0711 00:17:48.677001 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:48.677132 kubelet[2470]: W0711 00:17:48.677033 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:48.677132 kubelet[2470]: E0711 00:17:48.677054 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jul 11 00:17:48.677132 kubelet[2470]: E0711 00:17:48.677001 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:48.677132 kubelet[2470]: W0711 00:17:48.677033 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:48.677132 kubelet[2470]: E0711 00:17:48.677054 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
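Each of these kubelet triples is one FlexVolume probe cycle: the plugin directory nodeagent~uds exists under /opt/libexec/kubernetes/kubelet-plugins/volume/exec, but its driver binary uds does not, so the exec fails ("executable file not found in $PATH"), the driver produces no output, and decoding that empty output as JSON yields "unexpected end of JSON input"; the probe retries, so the triple recurs through 00:17:48.694 below. A small Go sketch of the failure pair (the DriverStatus shape is a trimmed assumption, not kubelet's actual type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Trimmed stand-in for the JSON status a FlexVolume driver prints on stdout.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        // "uds" is not on $PATH here, so lookup fails the same way the log shows.
        out, err := exec.Command("uds", "init").Output()
        fmt.Println("exec:", err) // exec: "uds": executable file not found in $PATH

        // The failed call produced no output; unmarshalling an empty byte slice
        // is exactly the other error in the log.
        var st DriverStatus
        fmt.Println("unmarshal:", json.Unmarshal(out, &st)) // unexpected end of JSON input
    }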
Jul 11 00:17:48.694092 kubelet[2470]: E0711 00:17:48.694065 2470 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:17:48.694145 kubelet[2470]: W0711 00:17:48.694104 2470 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:17:48.694145 kubelet[2470]: E0711 00:17:48.694117 2470 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:17:48.983476 containerd[1439]: time="2025-07-11T00:17:48.983337889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:48.984443 containerd[1439]: time="2025-07-11T00:17:48.984066693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 11 00:17:48.987291 containerd[1439]: time="2025-07-11T00:17:48.984830784Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:48.988155 containerd[1439]: time="2025-07-11T00:17:48.988115403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:48.988965 containerd[1439]: time="2025-07-11T00:17:48.988931066Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.04895937s" Jul 11 00:17:48.989029 containerd[1439]: time="2025-07-11T00:17:48.988964074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 11 00:17:48.990898 containerd[1439]: time="2025-07-11T00:17:48.990867942Z" level=info msg="CreateContainer within sandbox \"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:17:49.003519 containerd[1439]: time="2025-07-11T00:17:49.003478270Z" level=info msg="CreateContainer within sandbox 
\"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923\"" Jul 11 00:17:49.004096 containerd[1439]: time="2025-07-11T00:17:49.004067197Z" level=info msg="StartContainer for \"b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923\"" Jul 11 00:17:49.036585 systemd[1]: Started cri-containerd-b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923.scope - libcontainer container b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923. Jul 11 00:17:49.065304 containerd[1439]: time="2025-07-11T00:17:49.065194760Z" level=info msg="StartContainer for \"b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923\" returns successfully" Jul 11 00:17:49.110198 systemd[1]: cri-containerd-b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923.scope: Deactivated successfully. Jul 11 00:17:49.136266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923-rootfs.mount: Deactivated successfully. Jul 11 00:17:49.151106 containerd[1439]: time="2025-07-11T00:17:49.147172306Z" level=info msg="shim disconnected" id=b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923 namespace=k8s.io Jul 11 00:17:49.151106 containerd[1439]: time="2025-07-11T00:17:49.151099434Z" level=warning msg="cleaning up after shim disconnected" id=b2ed418f65fc36d5b23d9fdb066baa11bae33a48f1ae38e7a76c883c26502923 namespace=k8s.io Jul 11 00:17:49.151106 containerd[1439]: time="2025-07-11T00:17:49.151113517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:17:49.620978 kubelet[2470]: I0711 00:17:49.620948 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:17:49.621921 kubelet[2470]: E0711 00:17:49.621864 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:49.624872 containerd[1439]: time="2025-07-11T00:17:49.624707047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:17:50.529528 kubelet[2470]: E0711 00:17:50.528540 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t8xxb" podUID="4bb0797a-e6b9-46c0-ab2f-d796e4b11505" Jul 11 00:17:51.560783 containerd[1439]: time="2025-07-11T00:17:51.560735945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 11 00:17:51.564431 containerd[1439]: time="2025-07-11T00:17:51.561383434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:51.564431 containerd[1439]: time="2025-07-11T00:17:51.563694976Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:51.568268 containerd[1439]: time="2025-07-11T00:17:51.568212759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 11 00:17:51.569254 containerd[1439]: time="2025-07-11T00:17:51.569082973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.944325276s" Jul 11 00:17:51.569254 containerd[1439]: time="2025-07-11T00:17:51.569127662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 11 00:17:51.572988 containerd[1439]: time="2025-07-11T00:17:51.572874331Z" level=info msg="CreateContainer within sandbox \"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:17:51.585863 containerd[1439]: time="2025-07-11T00:17:51.585809357Z" level=info msg="CreateContainer within sandbox \"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec\"" Jul 11 00:17:51.587314 containerd[1439]: time="2025-07-11T00:17:51.587271530Z" level=info msg="StartContainer for \"cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec\"" Jul 11 00:17:51.626636 systemd[1]: Started cri-containerd-cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec.scope - libcontainer container cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec. Jul 11 00:17:51.843447 kubelet[2470]: I0711 00:17:51.843024 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:17:51.843447 kubelet[2470]: E0711 00:17:51.843361 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:52.040218 containerd[1439]: time="2025-07-11T00:17:52.040160949Z" level=info msg="StartContainer for \"cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec\" returns successfully" Jul 11 00:17:52.429450 systemd[1]: cri-containerd-cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec.scope: Deactivated successfully. Jul 11 00:17:52.450888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec-rootfs.mount: Deactivated successfully. 
Jul 11 00:17:52.455144 containerd[1439]: time="2025-07-11T00:17:52.455078915Z" level=info msg="shim disconnected" id=cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec namespace=k8s.io Jul 11 00:17:52.455144 containerd[1439]: time="2025-07-11T00:17:52.455133245Z" level=warning msg="cleaning up after shim disconnected" id=cc38b1c0e6ddd0fb54254a2447b2343474ff90ff2955d66763fab0b7674b98ec namespace=k8s.io Jul 11 00:17:52.455144 containerd[1439]: time="2025-07-11T00:17:52.455141327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:17:52.472253 kubelet[2470]: I0711 00:17:52.472218 2470 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:17:52.512671 kubelet[2470]: I0711 00:17:52.512587 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6895268f-5207-4d25-89a2-65b99ac04608-calico-apiserver-certs\") pod \"calico-apiserver-796d49478c-7n5wn\" (UID: \"6895268f-5207-4d25-89a2-65b99ac04608\") " pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" Jul 11 00:17:52.514110 kubelet[2470]: I0711 00:17:52.513179 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dt2p\" (UniqueName: \"kubernetes.io/projected/9904d86a-2797-43dd-8a39-c9306c873001-kube-api-access-7dt2p\") pod \"calico-kube-controllers-7667647d94-b4l6t\" (UID: \"9904d86a-2797-43dd-8a39-c9306c873001\") " pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" Jul 11 00:17:52.514110 kubelet[2470]: I0711 00:17:52.513215 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgqz\" (UniqueName: \"kubernetes.io/projected/6895268f-5207-4d25-89a2-65b99ac04608-kube-api-access-dzgqz\") pod \"calico-apiserver-796d49478c-7n5wn\" (UID: \"6895268f-5207-4d25-89a2-65b99ac04608\") " pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" Jul 11 00:17:52.514110 kubelet[2470]: I0711 00:17:52.513240 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1aa6bb20-6a01-4656-8af5-3bf6153d0dfe-config-volume\") pod \"coredns-668d6bf9bc-cvpp7\" (UID: \"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe\") " pod="kube-system/coredns-668d6bf9bc-cvpp7" Jul 11 00:17:52.514110 kubelet[2470]: I0711 00:17:52.513265 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwhf\" (UniqueName: \"kubernetes.io/projected/1aa6bb20-6a01-4656-8af5-3bf6153d0dfe-kube-api-access-9mwhf\") pod \"coredns-668d6bf9bc-cvpp7\" (UID: \"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe\") " pod="kube-system/coredns-668d6bf9bc-cvpp7" Jul 11 00:17:52.514110 kubelet[2470]: I0711 00:17:52.513296 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9904d86a-2797-43dd-8a39-c9306c873001-tigera-ca-bundle\") pod \"calico-kube-controllers-7667647d94-b4l6t\" (UID: \"9904d86a-2797-43dd-8a39-c9306c873001\") " pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" Jul 11 00:17:52.522929 systemd[1]: Created slice kubepods-besteffort-pod9904d86a_2797_43dd_8a39_c9306c873001.slice - libcontainer container kubepods-besteffort-pod9904d86a_2797_43dd_8a39_c9306c873001.slice. 
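Every RunPodSandbox failure below carries the same root cause, spelled out in each error string: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and that container's image (node:v3.30.2) is only now being pulled per the PullImage entry below, so each sandbox attempt is rejected and kubelet retries. A one-line Go reproduction of the exact error text (path taken from the log):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Until calico/node starts and writes the node name here, the CNI
        // plugin's stat fails and pod sandboxes on the pod network cannot
        // be created.
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            fmt.Println(err) // stat /var/lib/calico/nodename: no such file or directory
        }
    }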
Jul 11 00:17:52.532047 systemd[1]: Created slice kubepods-besteffort-pod6895268f_5207_4d25_89a2_65b99ac04608.slice - libcontainer container kubepods-besteffort-pod6895268f_5207_4d25_89a2_65b99ac04608.slice. Jul 11 00:17:52.540021 systemd[1]: Created slice kubepods-besteffort-podfe2fa29d_40a7_4cfb_b752_279a23adcd32.slice - libcontainer container kubepods-besteffort-podfe2fa29d_40a7_4cfb_b752_279a23adcd32.slice. Jul 11 00:17:52.546183 systemd[1]: Created slice kubepods-besteffort-pod312327e1_547f_48cf_897a_ee24ca2c1ae6.slice - libcontainer container kubepods-besteffort-pod312327e1_547f_48cf_897a_ee24ca2c1ae6.slice. Jul 11 00:17:52.552797 systemd[1]: Created slice kubepods-burstable-pod1aa6bb20_6a01_4656_8af5_3bf6153d0dfe.slice - libcontainer container kubepods-burstable-pod1aa6bb20_6a01_4656_8af5_3bf6153d0dfe.slice. Jul 11 00:17:52.561011 systemd[1]: Created slice kubepods-burstable-pode1f2b193_85e7_4131_903d_0d058505c956.slice - libcontainer container kubepods-burstable-pode1f2b193_85e7_4131_903d_0d058505c956.slice. Jul 11 00:17:52.567183 systemd[1]: Created slice kubepods-besteffort-podebfb1a06_1477_48f5_805f_9808c5339795.slice - libcontainer container kubepods-besteffort-podebfb1a06_1477_48f5_805f_9808c5339795.slice. Jul 11 00:17:52.574031 systemd[1]: Created slice kubepods-besteffort-pod4bb0797a_e6b9_46c0_ab2f_d796e4b11505.slice - libcontainer container kubepods-besteffort-pod4bb0797a_e6b9_46c0_ab2f_d796e4b11505.slice. Jul 11 00:17:52.576729 containerd[1439]: time="2025-07-11T00:17:52.576686980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8xxb,Uid:4bb0797a-e6b9-46c0-ab2f-d796e4b11505,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:52.620097 kubelet[2470]: I0711 00:17:52.615621 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebfb1a06-1477-48f5-805f-9808c5339795-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8rxq7\" (UID: \"ebfb1a06-1477-48f5-805f-9808c5339795\") " pod="calico-system/goldmane-768f4c5c69-8rxq7" Jul 11 00:17:52.620097 kubelet[2470]: I0711 00:17:52.615708 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebfb1a06-1477-48f5-805f-9808c5339795-config\") pod \"goldmane-768f4c5c69-8rxq7\" (UID: \"ebfb1a06-1477-48f5-805f-9808c5339795\") " pod="calico-system/goldmane-768f4c5c69-8rxq7" Jul 11 00:17:52.620097 kubelet[2470]: I0711 00:17:52.615728 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdztp\" (UniqueName: \"kubernetes.io/projected/312327e1-547f-48cf-897a-ee24ca2c1ae6-kube-api-access-pdztp\") pod \"whisker-8685df7cdf-pmpnr\" (UID: \"312327e1-547f-48cf-897a-ee24ca2c1ae6\") " pod="calico-system/whisker-8685df7cdf-pmpnr" Jul 11 00:17:52.620097 kubelet[2470]: I0711 00:17:52.615747 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe2fa29d-40a7-4cfb-b752-279a23adcd32-calico-apiserver-certs\") pod \"calico-apiserver-796d49478c-6j58z\" (UID: \"fe2fa29d-40a7-4cfb-b752-279a23adcd32\") " pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" Jul 11 00:17:52.620097 kubelet[2470]: I0711 00:17:52.615769 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/ebfb1a06-1477-48f5-805f-9808c5339795-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8rxq7\" (UID: \"ebfb1a06-1477-48f5-805f-9808c5339795\") " pod="calico-system/goldmane-768f4c5c69-8rxq7" Jul 11 00:17:52.621446 kubelet[2470]: I0711 00:17:52.615788 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhsqn\" (UniqueName: \"kubernetes.io/projected/ebfb1a06-1477-48f5-805f-9808c5339795-kube-api-access-nhsqn\") pod \"goldmane-768f4c5c69-8rxq7\" (UID: \"ebfb1a06-1477-48f5-805f-9808c5339795\") " pod="calico-system/goldmane-768f4c5c69-8rxq7" Jul 11 00:17:52.621446 kubelet[2470]: I0711 00:17:52.615809 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1f2b193-85e7-4131-903d-0d058505c956-config-volume\") pod \"coredns-668d6bf9bc-tbbr8\" (UID: \"e1f2b193-85e7-4131-903d-0d058505c956\") " pod="kube-system/coredns-668d6bf9bc-tbbr8" Jul 11 00:17:52.621446 kubelet[2470]: I0711 00:17:52.615825 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfwbm\" (UniqueName: \"kubernetes.io/projected/e1f2b193-85e7-4131-903d-0d058505c956-kube-api-access-qfwbm\") pod \"coredns-668d6bf9bc-tbbr8\" (UID: \"e1f2b193-85e7-4131-903d-0d058505c956\") " pod="kube-system/coredns-668d6bf9bc-tbbr8" Jul 11 00:17:52.621446 kubelet[2470]: I0711 00:17:52.615852 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-backend-key-pair\") pod \"whisker-8685df7cdf-pmpnr\" (UID: \"312327e1-547f-48cf-897a-ee24ca2c1ae6\") " pod="calico-system/whisker-8685df7cdf-pmpnr" Jul 11 00:17:52.621446 kubelet[2470]: I0711 00:17:52.615868 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-ca-bundle\") pod \"whisker-8685df7cdf-pmpnr\" (UID: \"312327e1-547f-48cf-897a-ee24ca2c1ae6\") " pod="calico-system/whisker-8685df7cdf-pmpnr" Jul 11 00:17:52.621624 kubelet[2470]: I0711 00:17:52.615883 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llzsr\" (UniqueName: \"kubernetes.io/projected/fe2fa29d-40a7-4cfb-b752-279a23adcd32-kube-api-access-llzsr\") pod \"calico-apiserver-796d49478c-6j58z\" (UID: \"fe2fa29d-40a7-4cfb-b752-279a23adcd32\") " pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" Jul 11 00:17:52.638894 kubelet[2470]: E0711 00:17:52.638316 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:52.646464 containerd[1439]: time="2025-07-11T00:17:52.646141599Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:17:52.779458 containerd[1439]: time="2025-07-11T00:17:52.779391387Z" level=error msg="Failed to destroy network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.779778 containerd[1439]: time="2025-07-11T00:17:52.779751016Z" 
level=error msg="encountered an error cleaning up failed sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.779824 containerd[1439]: time="2025-07-11T00:17:52.779804707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8xxb,Uid:4bb0797a-e6b9-46c0-ab2f-d796e4b11505,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.783171 kubelet[2470]: E0711 00:17:52.783114 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.785414 kubelet[2470]: E0711 00:17:52.785369 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:52.785494 kubelet[2470]: E0711 00:17:52.785438 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-t8xxb" Jul 11 00:17:52.785523 kubelet[2470]: E0711 00:17:52.785498 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-t8xxb_calico-system(4bb0797a-e6b9-46c0-ab2f-d796e4b11505)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-t8xxb_calico-system(4bb0797a-e6b9-46c0-ab2f-d796e4b11505)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8xxb" podUID="4bb0797a-e6b9-46c0-ab2f-d796e4b11505" Jul 11 00:17:52.828047 containerd[1439]: time="2025-07-11T00:17:52.828001631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7667647d94-b4l6t,Uid:9904d86a-2797-43dd-8a39-c9306c873001,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:52.838057 containerd[1439]: time="2025-07-11T00:17:52.837732746Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-796d49478c-7n5wn,Uid:6895268f-5207-4d25-89a2-65b99ac04608,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:17:52.844254 containerd[1439]: time="2025-07-11T00:17:52.844211073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-6j58z,Uid:fe2fa29d-40a7-4cfb-b752-279a23adcd32,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:17:52.850938 containerd[1439]: time="2025-07-11T00:17:52.850901522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8685df7cdf-pmpnr,Uid:312327e1-547f-48cf-897a-ee24ca2c1ae6,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:52.856561 kubelet[2470]: E0711 00:17:52.856530 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:52.859393 containerd[1439]: time="2025-07-11T00:17:52.859185998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvpp7,Uid:1aa6bb20-6a01-4656-8af5-3bf6153d0dfe,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:52.865354 kubelet[2470]: E0711 00:17:52.865288 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:17:52.866153 containerd[1439]: time="2025-07-11T00:17:52.866117213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tbbr8,Uid:e1f2b193-85e7-4131-903d-0d058505c956,Namespace:kube-system,Attempt:0,}" Jul 11 00:17:52.873087 containerd[1439]: time="2025-07-11T00:17:52.872020430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8rxq7,Uid:ebfb1a06-1477-48f5-805f-9808c5339795,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:52.923925 containerd[1439]: time="2025-07-11T00:17:52.923874019Z" level=error msg="Failed to destroy network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.924244 containerd[1439]: time="2025-07-11T00:17:52.924212484Z" level=error msg="encountered an error cleaning up failed sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.924286 containerd[1439]: time="2025-07-11T00:17:52.924268775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7667647d94-b4l6t,Uid:9904d86a-2797-43dd-8a39-c9306c873001,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.924620 kubelet[2470]: E0711 00:17:52.924566 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.924888 kubelet[2470]: E0711 00:17:52.924867 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" Jul 11 00:17:52.925075 kubelet[2470]: E0711 00:17:52.925058 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" Jul 11 00:17:52.925216 kubelet[2470]: E0711 00:17:52.925183 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7667647d94-b4l6t_calico-system(9904d86a-2797-43dd-8a39-c9306c873001)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7667647d94-b4l6t_calico-system(9904d86a-2797-43dd-8a39-c9306c873001)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" podUID="9904d86a-2797-43dd-8a39-c9306c873001" Jul 11 00:17:52.935113 containerd[1439]: time="2025-07-11T00:17:52.935064095Z" level=error msg="Failed to destroy network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.935614 containerd[1439]: time="2025-07-11T00:17:52.935576913Z" level=error msg="encountered an error cleaning up failed sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.935690 containerd[1439]: time="2025-07-11T00:17:52.935645967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-7n5wn,Uid:6895268f-5207-4d25-89a2-65b99ac04608,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.935910 kubelet[2470]: E0711 00:17:52.935875 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.935974 kubelet[2470]: E0711 00:17:52.935938 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" Jul 11 00:17:52.935974 kubelet[2470]: E0711 00:17:52.935959 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" Jul 11 00:17:52.936032 kubelet[2470]: E0711 00:17:52.936000 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-796d49478c-7n5wn_calico-apiserver(6895268f-5207-4d25-89a2-65b99ac04608)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-796d49478c-7n5wn_calico-apiserver(6895268f-5207-4d25-89a2-65b99ac04608)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" podUID="6895268f-5207-4d25-89a2-65b99ac04608" Jul 11 00:17:52.964690 containerd[1439]: time="2025-07-11T00:17:52.964637311Z" level=error msg="Failed to destroy network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.965038 containerd[1439]: time="2025-07-11T00:17:52.964992380Z" level=error msg="encountered an error cleaning up failed sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.965086 containerd[1439]: time="2025-07-11T00:17:52.965057352Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-6j58z,Uid:fe2fa29d-40a7-4cfb-b752-279a23adcd32,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.965312 
kubelet[2470]: E0711 00:17:52.965272 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.965389 kubelet[2470]: E0711 00:17:52.965330 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" Jul 11 00:17:52.965389 kubelet[2470]: E0711 00:17:52.965354 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" Jul 11 00:17:52.965457 kubelet[2470]: E0711 00:17:52.965397 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-796d49478c-6j58z_calico-apiserver(fe2fa29d-40a7-4cfb-b752-279a23adcd32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-796d49478c-6j58z_calico-apiserver(fe2fa29d-40a7-4cfb-b752-279a23adcd32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" podUID="fe2fa29d-40a7-4cfb-b752-279a23adcd32" Jul 11 00:17:52.975636 containerd[1439]: time="2025-07-11T00:17:52.974853599Z" level=error msg="Failed to destroy network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.975636 containerd[1439]: time="2025-07-11T00:17:52.975195265Z" level=error msg="encountered an error cleaning up failed sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.975636 containerd[1439]: time="2025-07-11T00:17:52.975242354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8685df7cdf-pmpnr,Uid:312327e1-547f-48cf-897a-ee24ca2c1ae6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.975869 kubelet[2470]: E0711 00:17:52.975457 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.975869 kubelet[2470]: E0711 00:17:52.975522 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8685df7cdf-pmpnr" Jul 11 00:17:52.975869 kubelet[2470]: E0711 00:17:52.975543 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8685df7cdf-pmpnr" Jul 11 00:17:52.976035 kubelet[2470]: E0711 00:17:52.975588 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8685df7cdf-pmpnr_calico-system(312327e1-547f-48cf-897a-ee24ca2c1ae6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8685df7cdf-pmpnr_calico-system(312327e1-547f-48cf-897a-ee24ca2c1ae6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8685df7cdf-pmpnr" podUID="312327e1-547f-48cf-897a-ee24ca2c1ae6" Jul 11 00:17:52.988547 containerd[1439]: time="2025-07-11T00:17:52.988498068Z" level=error msg="Failed to destroy network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.988949 containerd[1439]: time="2025-07-11T00:17:52.988909347Z" level=error msg="Failed to destroy network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.989104 containerd[1439]: time="2025-07-11T00:17:52.989069498Z" level=error msg="encountered an error cleaning up failed sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 
11 00:17:52.989213 containerd[1439]: time="2025-07-11T00:17:52.989193001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8rxq7,Uid:ebfb1a06-1477-48f5-805f-9808c5339795,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.989579 kubelet[2470]: E0711 00:17:52.989527 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.989659 kubelet[2470]: E0711 00:17:52.989600 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8rxq7" Jul 11 00:17:52.989659 kubelet[2470]: E0711 00:17:52.989621 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8rxq7" Jul 11 00:17:52.989715 kubelet[2470]: E0711 00:17:52.989669 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8rxq7_calico-system(ebfb1a06-1477-48f5-805f-9808c5339795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8rxq7_calico-system(ebfb1a06-1477-48f5-805f-9808c5339795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8rxq7" podUID="ebfb1a06-1477-48f5-805f-9808c5339795" Jul 11 00:17:52.989766 containerd[1439]: time="2025-07-11T00:17:52.989694938Z" level=error msg="encountered an error cleaning up failed sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.989853 containerd[1439]: time="2025-07-11T00:17:52.989812401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tbbr8,Uid:e1f2b193-85e7-4131-903d-0d058505c956,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.990104 kubelet[2470]: E0711 00:17:52.990066 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.990178 kubelet[2470]: E0711 00:17:52.990109 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tbbr8" Jul 11 00:17:52.990178 kubelet[2470]: E0711 00:17:52.990129 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tbbr8" Jul 11 00:17:52.990178 kubelet[2470]: E0711 00:17:52.990166 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tbbr8_kube-system(e1f2b193-85e7-4131-903d-0d058505c956)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tbbr8_kube-system(e1f2b193-85e7-4131-903d-0d058505c956)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tbbr8" podUID="e1f2b193-85e7-4131-903d-0d058505c956" Jul 11 00:17:52.991664 containerd[1439]: time="2025-07-11T00:17:52.991617308Z" level=error msg="Failed to destroy network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.992009 containerd[1439]: time="2025-07-11T00:17:52.991958614Z" level=error msg="encountered an error cleaning up failed sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.992039 containerd[1439]: time="2025-07-11T00:17:52.992023067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvpp7,Uid:1aa6bb20-6a01-4656-8af5-3bf6153d0dfe,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.992214 kubelet[2470]: E0711 00:17:52.992187 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:52.992248 kubelet[2470]: E0711 00:17:52.992229 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cvpp7" Jul 11 00:17:52.992271 kubelet[2470]: E0711 00:17:52.992243 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cvpp7" Jul 11 00:17:52.992295 kubelet[2470]: E0711 00:17:52.992269 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cvpp7_kube-system(1aa6bb20-6a01-4656-8af5-3bf6153d0dfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cvpp7_kube-system(1aa6bb20-6a01-4656-8af5-3bf6153d0dfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cvpp7" podUID="1aa6bb20-6a01-4656-8af5-3bf6153d0dfe" Jul 11 00:17:53.599722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32-shm.mount: Deactivated successfully. 
Jul 11 00:17:53.644716 containerd[1439]: time="2025-07-11T00:17:53.642038375Z" level=info msg="StopPodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\"" Jul 11 00:17:53.644716 containerd[1439]: time="2025-07-11T00:17:53.642250655Z" level=info msg="Ensure that sandbox 197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547 in task-service has been cleanup successfully" Jul 11 00:17:53.676667 containerd[1439]: time="2025-07-11T00:17:53.676603517Z" level=error msg="StopPodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" failed" error="failed to destroy network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.687254 kubelet[2470]: E0711 00:17:53.687187 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:17:53.691978 kubelet[2470]: I0711 00:17:53.691939 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:17:53.692076 kubelet[2470]: I0711 00:17:53.692012 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:17:53.692076 kubelet[2470]: I0711 00:17:53.692027 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:17:53.692076 kubelet[2470]: I0711 00:17:53.692040 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:17:53.692076 kubelet[2470]: I0711 00:17:53.692050 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:17:53.692076 kubelet[2470]: I0711 00:17:53.692060 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:17:53.692076 kubelet[2470]: I0711 00:17:53.692071 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Jul 11 00:17:53.692229 kubelet[2470]: I0711 00:17:53.692081 2470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:17:53.692535 kubelet[2470]: E0711 00:17:53.692460 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547"} Jul 11 00:17:53.692588 kubelet[2470]: E0711 00:17:53.692557 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"ebfb1a06-1477-48f5-805f-9808c5339795\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.693265 kubelet[2470]: E0711 00:17:53.692597 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebfb1a06-1477-48f5-805f-9808c5339795\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8rxq7" podUID="ebfb1a06-1477-48f5-805f-9808c5339795" Jul 11 00:17:53.693363 containerd[1439]: time="2025-07-11T00:17:53.692841054Z" level=info msg="StopPodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\"" Jul 11 00:17:53.693363 containerd[1439]: time="2025-07-11T00:17:53.693028249Z" level=info msg="Ensure that sandbox 640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d in task-service has been cleanup successfully" Jul 11 00:17:53.693363 containerd[1439]: time="2025-07-11T00:17:53.692841214Z" level=info msg="StopPodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\"" Jul 11 00:17:53.693363 containerd[1439]: time="2025-07-11T00:17:53.693281176Z" level=info msg="Ensure that sandbox 9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb in task-service has been cleanup successfully" Jul 11 00:17:53.695441 containerd[1439]: time="2025-07-11T00:17:53.695033422Z" level=info msg="StopPodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\"" Jul 11 00:17:53.695441 containerd[1439]: time="2025-07-11T00:17:53.695097034Z" level=info msg="StopPodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\"" Jul 11 00:17:53.695441 containerd[1439]: time="2025-07-11T00:17:53.695192971Z" level=info msg="Ensure that sandbox 2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32 in task-service has been cleanup successfully" Jul 11 00:17:53.695441 containerd[1439]: time="2025-07-11T00:17:53.695241020Z" level=info msg="Ensure that sandbox 1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116 in task-service has been cleanup successfully" Jul 11 00:17:53.697004 containerd[1439]: time="2025-07-11T00:17:53.696967421Z" level=info msg="StopPodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\"" Jul 11 00:17:53.697161 containerd[1439]: time="2025-07-11T00:17:53.697137933Z" level=info msg="Ensure that sandbox 94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7 in task-service has been cleanup successfully" Jul 11 00:17:53.697842 containerd[1439]: time="2025-07-11T00:17:53.697800456Z" level=info msg="StopPodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\"" Jul 11 00:17:53.698007 containerd[1439]: time="2025-07-11T00:17:53.697972608Z" level=info msg="Ensure that sandbox 3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd in task-service has been cleanup successfully" Jul 11 00:17:53.698057 containerd[1439]: 
time="2025-07-11T00:17:53.698025058Z" level=info msg="StopPodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\"" Jul 11 00:17:53.698195 containerd[1439]: time="2025-07-11T00:17:53.698170405Z" level=info msg="Ensure that sandbox adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb in task-service has been cleanup successfully" Jul 11 00:17:53.733250 containerd[1439]: time="2025-07-11T00:17:53.733188031Z" level=error msg="StopPodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" failed" error="failed to destroy network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.733526 kubelet[2470]: E0711 00:17:53.733478 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:17:53.733603 kubelet[2470]: E0711 00:17:53.733538 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb"} Jul 11 00:17:53.733603 kubelet[2470]: E0711 00:17:53.733575 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6895268f-5207-4d25-89a2-65b99ac04608\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.733752 kubelet[2470]: E0711 00:17:53.733597 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6895268f-5207-4d25-89a2-65b99ac04608\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" podUID="6895268f-5207-4d25-89a2-65b99ac04608" Jul 11 00:17:53.766863 containerd[1439]: time="2025-07-11T00:17:53.766806757Z" level=error msg="StopPodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" failed" error="failed to destroy network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.767157 kubelet[2470]: E0711 00:17:53.767112 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Jul 11 00:17:53.767222 kubelet[2470]: E0711 00:17:53.767167 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"} Jul 11 00:17:53.774422 kubelet[2470]: E0711 00:17:53.767201 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e1f2b193-85e7-4131-903d-0d058505c956\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.775782 kubelet[2470]: E0711 00:17:53.775734 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e1f2b193-85e7-4131-903d-0d058505c956\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tbbr8" podUID="e1f2b193-85e7-4131-903d-0d058505c956" Jul 11 00:17:53.776852 containerd[1439]: time="2025-07-11T00:17:53.776801534Z" level=error msg="StopPodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" failed" error="failed to destroy network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.777065 kubelet[2470]: E0711 00:17:53.777032 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:17:53.777108 kubelet[2470]: E0711 00:17:53.777078 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d"} Jul 11 00:17:53.777142 kubelet[2470]: E0711 00:17:53.777111 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 11 00:17:53.777142 kubelet[2470]: E0711 00:17:53.777131 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cvpp7" podUID="1aa6bb20-6a01-4656-8af5-3bf6153d0dfe" Jul 11 00:17:53.782782 containerd[1439]: time="2025-07-11T00:17:53.782724154Z" level=error msg="StopPodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" failed" error="failed to destroy network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.783308 kubelet[2470]: E0711 00:17:53.783259 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:17:53.783399 kubelet[2470]: E0711 00:17:53.783315 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32"} Jul 11 00:17:53.783399 kubelet[2470]: E0711 00:17:53.783347 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.783399 kubelet[2470]: E0711 00:17:53.783369 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bb0797a-e6b9-46c0-ab2f-d796e4b11505\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-t8xxb" podUID="4bb0797a-e6b9-46c0-ab2f-d796e4b11505" Jul 11 00:17:53.788687 containerd[1439]: time="2025-07-11T00:17:53.788629612Z" level=error msg="StopPodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" failed" error="failed to destroy network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 11 00:17:53.789061 kubelet[2470]: E0711 00:17:53.789003 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:17:53.789147 kubelet[2470]: E0711 00:17:53.789058 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd"} Jul 11 00:17:53.789147 kubelet[2470]: E0711 00:17:53.789093 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"312327e1-547f-48cf-897a-ee24ca2c1ae6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.789147 kubelet[2470]: E0711 00:17:53.789119 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"312327e1-547f-48cf-897a-ee24ca2c1ae6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8685df7cdf-pmpnr" podUID="312327e1-547f-48cf-897a-ee24ca2c1ae6" Jul 11 00:17:53.790461 containerd[1439]: time="2025-07-11T00:17:53.790416624Z" level=error msg="StopPodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" failed" error="failed to destroy network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.790661 kubelet[2470]: E0711 00:17:53.790608 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:17:53.790706 kubelet[2470]: E0711 00:17:53.790666 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb"} Jul 11 00:17:53.790706 kubelet[2470]: E0711 00:17:53.790696 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe2fa29d-40a7-4cfb-b752-279a23adcd32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.790780 kubelet[2470]: E0711 00:17:53.790721 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe2fa29d-40a7-4cfb-b752-279a23adcd32\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" podUID="fe2fa29d-40a7-4cfb-b752-279a23adcd32" Jul 11 00:17:53.792248 containerd[1439]: time="2025-07-11T00:17:53.792214998Z" level=error msg="StopPodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" failed" error="failed to destroy network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:17:53.793492 kubelet[2470]: E0711 00:17:53.792493 2470 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:17:53.793492 kubelet[2470]: E0711 00:17:53.792574 2470 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116"} Jul 11 00:17:53.793492 kubelet[2470]: E0711 00:17:53.792619 2470 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9904d86a-2797-43dd-8a39-c9306c873001\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:17:53.793492 kubelet[2470]: E0711 00:17:53.792637 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9904d86a-2797-43dd-8a39-c9306c873001\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" podUID="9904d86a-2797-43dd-8a39-c9306c873001" Jul 11 00:17:55.863882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328773177.mount: Deactivated successfully. 
Jul 11 00:17:56.182883 containerd[1439]: time="2025-07-11T00:17:56.182754223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:56.183483 containerd[1439]: time="2025-07-11T00:17:56.183352164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 11 00:17:56.184442 containerd[1439]: time="2025-07-11T00:17:56.184394658Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:56.186370 containerd[1439]: time="2025-07-11T00:17:56.186319661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:56.186867 containerd[1439]: time="2025-07-11T00:17:56.186829587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.540459264s" Jul 11 00:17:56.186905 containerd[1439]: time="2025-07-11T00:17:56.186866313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 11 00:17:56.193580 containerd[1439]: time="2025-07-11T00:17:56.193537512Z" level=info msg="CreateContainer within sandbox \"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:17:56.230797 containerd[1439]: time="2025-07-11T00:17:56.230746314Z" level=info msg="CreateContainer within sandbox \"f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ad91c0331d34603095c24b8ff82646381ee0bb59f15e0fa7dbe22bc5446abcbf\"" Jul 11 00:17:56.238293 containerd[1439]: time="2025-07-11T00:17:56.238245132Z" level=info msg="StartContainer for \"ad91c0331d34603095c24b8ff82646381ee0bb59f15e0fa7dbe22bc5446abcbf\"" Jul 11 00:17:56.290564 systemd[1]: Started cri-containerd-ad91c0331d34603095c24b8ff82646381ee0bb59f15e0fa7dbe22bc5446abcbf.scope - libcontainer container ad91c0331d34603095c24b8ff82646381ee0bb59f15e0fa7dbe22bc5446abcbf. Jul 11 00:17:56.319387 containerd[1439]: time="2025-07-11T00:17:56.318275998Z" level=info msg="StartContainer for \"ad91c0331d34603095c24b8ff82646381ee0bb59f15e0fa7dbe22bc5446abcbf\" returns successfully" Jul 11 00:17:56.538574 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:17:56.538671 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
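
The recovery begins here: the calico/node image (reported as 152544771 bytes, pulled in about 3.54 s) is unpacked, a container is created inside the existing sandbox f586aa96a02c326e09191a701c30b1b7390ea2746add4e2d6ec4b95a4d76904f, and it is started; the WireGuard module load that follows is calico-node probing kernel support for encrypted pod-to-pod traffic. For reference, the same pull, create, start sequence can be driven directly with the containerd 1.x Go client. This is a sketch under assumptions: a local containerd socket, the k8s.io namespace used by the CRI plugin, and a made-up container ID; only the image ref is taken from the log.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes resources in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack the image seen in the log above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: build a container from the image's own config.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task (the running process) and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started task with pid", task.Pid())
}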
Jul 11 00:17:56.660035 containerd[1439]: time="2025-07-11T00:17:56.659625581Z" level=info msg="StopPodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\"" Jul 11 00:17:56.687008 kubelet[2470]: I0711 00:17:56.686585 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xfh4h" podStartSLOduration=1.9203688479999999 podStartE2EDuration="11.686297815s" podCreationTimestamp="2025-07-11 00:17:45 +0000 UTC" firstStartedPulling="2025-07-11 00:17:46.421860661 +0000 UTC m=+20.981580153" lastFinishedPulling="2025-07-11 00:17:56.187789628 +0000 UTC m=+30.747509120" observedRunningTime="2025-07-11 00:17:56.684557483 +0000 UTC m=+31.244277015" watchObservedRunningTime="2025-07-11 00:17:56.686297815 +0000 UTC m=+31.246017347" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.868 [INFO][3774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.868 [INFO][3774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" iface="eth0" netns="/var/run/netns/cni-c0e42f20-48ed-8dcd-e89c-cd15a4676d6b" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.869 [INFO][3774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" iface="eth0" netns="/var/run/netns/cni-c0e42f20-48ed-8dcd-e89c-cd15a4676d6b" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.871 [INFO][3774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" iface="eth0" netns="/var/run/netns/cni-c0e42f20-48ed-8dcd-e89c-cd15a4676d6b" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.871 [INFO][3774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.874 [INFO][3774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.996 [INFO][3785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.996 [INFO][3785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:56.996 [INFO][3785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:57.010 [WARNING][3785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:57.010 [INFO][3785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:57.013 [INFO][3785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:17:57.017773 containerd[1439]: 2025-07-11 00:17:57.015 [INFO][3774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:17:57.019101 containerd[1439]: time="2025-07-11T00:17:57.018878235Z" level=info msg="TearDown network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" successfully" Jul 11 00:17:57.019101 containerd[1439]: time="2025-07-11T00:17:57.018919362Z" level=info msg="StopPodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" returns successfully" Jul 11 00:17:57.021394 systemd[1]: run-netns-cni\x2dc0e42f20\x2d48ed\x2d8dcd\x2de89c\x2dcd15a4676d6b.mount: Deactivated successfully. Jul 11 00:17:57.049714 kubelet[2470]: I0711 00:17:57.049401 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-ca-bundle\") pod \"312327e1-547f-48cf-897a-ee24ca2c1ae6\" (UID: \"312327e1-547f-48cf-897a-ee24ca2c1ae6\") " Jul 11 00:17:57.049714 kubelet[2470]: I0711 00:17:57.049586 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-backend-key-pair\") pod \"312327e1-547f-48cf-897a-ee24ca2c1ae6\" (UID: \"312327e1-547f-48cf-897a-ee24ca2c1ae6\") " Jul 11 00:17:57.054666 kubelet[2470]: I0711 00:17:57.054619 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "312327e1-547f-48cf-897a-ee24ca2c1ae6" (UID: "312327e1-547f-48cf-897a-ee24ca2c1ae6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:17:57.065470 kubelet[2470]: I0711 00:17:57.065328 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "312327e1-547f-48cf-897a-ee24ca2c1ae6" (UID: "312327e1-547f-48cf-897a-ee24ca2c1ae6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:17:57.066895 systemd[1]: var-lib-kubelet-pods-312327e1\x2d547f\x2d48cf\x2d897a\x2dee24ca2c1ae6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
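
Contrast this teardown with the failures at 00:17:53: with /var/lib/calico/nodename now present, the CNI delete runs to completion. The plugin enters the netns, finds the veth already gone, and releases the IPAM reservation under the host-wide lock; the WARNING about releasing an address that doesn't exist is benign, because the failed sandbox never actually got an address, and release is deliberately idempotent so that half-finished setups can always be torn down. A small Go sketch of that idempotent-release pattern; ipamStore and ReleaseByHandle are hypothetical stand-ins, not Calico's API.

package main

import "fmt"

// ipamStore maps allocation handle IDs to assigned addresses
// (a stand-in for Calico's datastore, for illustration only).
type ipamStore struct {
	byHandle map[string]string
}

// ReleaseByHandle frees whatever was allocated under handleID. Releasing a
// handle that was never assigned is not an error: teardown must be safe to
// run against sandboxes whose setup failed part-way, exactly like the
// whisker sandbox in the log above.
func (s *ipamStore) ReleaseByHandle(handleID string) {
	addr, ok := s.byHandle[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
		return
	}
	delete(s.byHandle, handleID)
	fmt.Printf("released %s (was %s)\n", handleID, addr)
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}}
	// The failed whisker sandbox never received an address, so this is a no-op.
	s.ReleaseByHandle("k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd")
}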
Jul 11 00:17:57.150610 kubelet[2470]: I0711 00:17:57.150544 2470 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdztp\" (UniqueName: \"kubernetes.io/projected/312327e1-547f-48cf-897a-ee24ca2c1ae6-kube-api-access-pdztp\") pod \"312327e1-547f-48cf-897a-ee24ca2c1ae6\" (UID: \"312327e1-547f-48cf-897a-ee24ca2c1ae6\") " Jul 11 00:17:57.150757 kubelet[2470]: I0711 00:17:57.150640 2470 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:57.150757 kubelet[2470]: I0711 00:17:57.150652 2470 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/312327e1-547f-48cf-897a-ee24ca2c1ae6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:57.154961 systemd[1]: var-lib-kubelet-pods-312327e1\x2d547f\x2d48cf\x2d897a\x2dee24ca2c1ae6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpdztp.mount: Deactivated successfully. Jul 11 00:17:57.155784 kubelet[2470]: I0711 00:17:57.155671 2470 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312327e1-547f-48cf-897a-ee24ca2c1ae6-kube-api-access-pdztp" (OuterVolumeSpecName: "kube-api-access-pdztp") pod "312327e1-547f-48cf-897a-ee24ca2c1ae6" (UID: "312327e1-547f-48cf-897a-ee24ca2c1ae6"). InnerVolumeSpecName "kube-api-access-pdztp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:17:57.250895 kubelet[2470]: I0711 00:17:57.250846 2470 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pdztp\" (UniqueName: \"kubernetes.io/projected/312327e1-547f-48cf-897a-ee24ca2c1ae6-kube-api-access-pdztp\") on node \"localhost\" DevicePath \"\"" Jul 11 00:17:57.536906 systemd[1]: Removed slice kubepods-besteffort-pod312327e1_547f_48cf_897a_ee24ca2c1ae6.slice - libcontainer container kubepods-besteffort-pod312327e1_547f_48cf_897a_ee24ca2c1ae6.slice. Jul 11 00:17:57.671041 kubelet[2470]: I0711 00:17:57.671014 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:17:57.748484 systemd[1]: Created slice kubepods-besteffort-podd41efc76_dc5d_4d44_811c_f59eddb15202.slice - libcontainer container kubepods-besteffort-podd41efc76_dc5d_4d44_811c_f59eddb15202.slice. 
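
The old whisker pod is now fully reconciled away (its volumes unmounted and detached, its cgroup slice removed), and the Deployment's new ReplicaSet hash 69696c66f9 produces a replacement pod whose volumes are attached before its sandbox is created. The volume manager behind these reconciler_common.go lines is a desired-state-versus-actual-state loop; here is a toy Go model of that shape, not kubelet source, with volume names abbreviated from the log.

package main

import "fmt"

// reconcileVolumes brings the set of mounted volumes in line with what the
// pod specs desire: unmount what is mounted but no longer wanted, then
// mount what is wanted but missing. A toy model of the kubelet volume
// manager's reconciler, for illustration only.
func reconcileVolumes(desired, actual map[string]bool) {
	for name := range actual {
		if !desired[name] {
			fmt.Println("UnmountVolume started for volume", name)
			delete(actual, name)
			fmt.Println("Volume detached for volume", name)
		}
	}
	for name := range desired {
		if !actual[name] {
			fmt.Println("MountVolume started for volume", name)
			actual[name] = true
		}
	}
}

func main() {
	// Old whisker pod's volumes are mounted; the new pod wants its own set.
	actual := map[string]bool{
		"whisker-ca-bundle(312327e1)": true, "whisker-backend-key-pair(312327e1)": true, "kube-api-access-pdztp": true,
	}
	desired := map[string]bool{
		"whisker-ca-bundle(d41efc76)": true, "whisker-backend-key-pair(d41efc76)": true, "kube-api-access-xxhcl": true,
	}
	reconcileVolumes(desired, actual)
}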
Jul 11 00:17:57.754220 kubelet[2470]: I0711 00:17:57.753439 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d41efc76-dc5d-4d44-811c-f59eddb15202-whisker-ca-bundle\") pod \"whisker-69696c66f9-mkskw\" (UID: \"d41efc76-dc5d-4d44-811c-f59eddb15202\") " pod="calico-system/whisker-69696c66f9-mkskw" Jul 11 00:17:57.754220 kubelet[2470]: I0711 00:17:57.753547 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxhcl\" (UniqueName: \"kubernetes.io/projected/d41efc76-dc5d-4d44-811c-f59eddb15202-kube-api-access-xxhcl\") pod \"whisker-69696c66f9-mkskw\" (UID: \"d41efc76-dc5d-4d44-811c-f59eddb15202\") " pod="calico-system/whisker-69696c66f9-mkskw" Jul 11 00:17:57.754220 kubelet[2470]: I0711 00:17:57.753606 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d41efc76-dc5d-4d44-811c-f59eddb15202-whisker-backend-key-pair\") pod \"whisker-69696c66f9-mkskw\" (UID: \"d41efc76-dc5d-4d44-811c-f59eddb15202\") " pod="calico-system/whisker-69696c66f9-mkskw" Jul 11 00:17:58.052141 containerd[1439]: time="2025-07-11T00:17:58.052098892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69696c66f9-mkskw,Uid:d41efc76-dc5d-4d44-811c-f59eddb15202,Namespace:calico-system,Attempt:0,}" Jul 11 00:17:58.265155 systemd-networkd[1387]: cali82e84a56439: Link UP Jul 11 00:17:58.265853 systemd-networkd[1387]: cali82e84a56439: Gained carrier Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.107 [INFO][3809] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.142 [INFO][3809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--69696c66f9--mkskw-eth0 whisker-69696c66f9- calico-system d41efc76-dc5d-4d44-811c-f59eddb15202 932 0 2025-07-11 00:17:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:69696c66f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-69696c66f9-mkskw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali82e84a56439 [] [] }} ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.142 [INFO][3809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.185 [INFO][3907] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" HandleID="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Workload="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.185 [INFO][3907] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" HandleID="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Workload="localhost-k8s-whisker--69696c66f9--mkskw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-69696c66f9-mkskw", "timestamp":"2025-07-11 00:17:58.185342001 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.185 [INFO][3907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.185 [INFO][3907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.185 [INFO][3907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.205 [INFO][3907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.224 [INFO][3907] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.229 [INFO][3907] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.231 [INFO][3907] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.234 [INFO][3907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.235 [INFO][3907] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.236 [INFO][3907] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517 Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.241 [INFO][3907] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.246 [INFO][3907] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.246 [INFO][3907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" host="localhost" Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.246 [INFO][3907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:17:58.295526 containerd[1439]: 2025-07-11 00:17:58.246 [INFO][3907] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" HandleID="k8s-pod-network.1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Workload="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.296184 containerd[1439]: 2025-07-11 00:17:58.253 [INFO][3809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69696c66f9--mkskw-eth0", GenerateName:"whisker-69696c66f9-", Namespace:"calico-system", SelfLink:"", UID:"d41efc76-dc5d-4d44-811c-f59eddb15202", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69696c66f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-69696c66f9-mkskw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali82e84a56439", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:58.296184 containerd[1439]: 2025-07-11 00:17:58.253 [INFO][3809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.296184 containerd[1439]: 2025-07-11 00:17:58.253 [INFO][3809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82e84a56439 ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.296184 containerd[1439]: 2025-07-11 00:17:58.266 [INFO][3809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.296184 containerd[1439]: 2025-07-11 00:17:58.267 [INFO][3809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--69696c66f9--mkskw-eth0", GenerateName:"whisker-69696c66f9-", Namespace:"calico-system", SelfLink:"", UID:"d41efc76-dc5d-4d44-811c-f59eddb15202", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"69696c66f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517", Pod:"whisker-69696c66f9-mkskw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali82e84a56439", MAC:"b6:8f:65:c2:19:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:17:58.296184 containerd[1439]: 2025-07-11 00:17:58.288 [INFO][3809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517" Namespace="calico-system" Pod="whisker-69696c66f9-mkskw" WorkloadEndpoint="localhost-k8s-whisker--69696c66f9--mkskw-eth0" Jul 11 00:17:58.340935 containerd[1439]: time="2025-07-11T00:17:58.340730679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:17:58.340935 containerd[1439]: time="2025-07-11T00:17:58.340809172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:17:58.340935 containerd[1439]: time="2025-07-11T00:17:58.340898546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:58.343259 containerd[1439]: time="2025-07-11T00:17:58.341284687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:17:58.381673 systemd[1]: Started cri-containerd-1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517.scope - libcontainer container 1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517. 
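
The IPAM sequence above is worth unpacking: the host holds an affinity for the block 192.168.88.128/26, and the allocator hands out the first free address after the block base, which is why whisker-69696c66f9-mkskw lands on 192.168.88.129. A minimal stdlib-only Go sketch of that arithmetic follows (an illustration only, not Calico's actual allocator, which persists handles and block affinities in the datastore):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block affinity recorded above: host "localhost" owns 192.168.88.128/26.
	block := netip.MustParsePrefix("192.168.88.128/26")

	// Walk the block the way a sequential allocator would, skipping the
	// block's own base address; the first workload gets .129, matching
	// the address handed to whisker above.
	addr := block.Addr().Next()
	for i := 0; i < 4 && block.Contains(addr); i++ {
		fmt.Println(addr) // 192.168.88.129, .130, .131, .132
		addr = addr.Next()
	}
}

The same walk accounts for the .130, .131, and .132 assignments made later in this log.
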
Jul 11 00:17:58.420250 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:17:58.453921 containerd[1439]: time="2025-07-11T00:17:58.453863021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69696c66f9-mkskw,Uid:d41efc76-dc5d-4d44-811c-f59eddb15202,Namespace:calico-system,Attempt:0,} returns sandbox id \"1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517\"" Jul 11 00:17:58.455648 containerd[1439]: time="2025-07-11T00:17:58.455500039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:17:58.462455 kernel: bpftool[4009]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:17:58.668808 systemd-networkd[1387]: vxlan.calico: Link UP Jul 11 00:17:58.668815 systemd-networkd[1387]: vxlan.calico: Gained carrier Jul 11 00:17:59.351401 containerd[1439]: time="2025-07-11T00:17:59.351350173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:59.352643 containerd[1439]: time="2025-07-11T00:17:59.352594524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 11 00:17:59.354013 containerd[1439]: time="2025-07-11T00:17:59.353957532Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:59.356961 containerd[1439]: time="2025-07-11T00:17:59.356810848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:17:59.358281 containerd[1439]: time="2025-07-11T00:17:59.357744551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 902.206427ms" Jul 11 00:17:59.358281 containerd[1439]: time="2025-07-11T00:17:59.357781277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 11 00:17:59.360478 containerd[1439]: time="2025-07-11T00:17:59.360453165Z" level=info msg="CreateContainer within sandbox \"1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:17:59.373519 containerd[1439]: time="2025-07-11T00:17:59.373469075Z" level=info msg="CreateContainer within sandbox \"1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"71179eb6aab41804f033ace656cc12509ba30cf9e02f69de1fe364645ab7e42d\"" Jul 11 00:17:59.375161 containerd[1439]: time="2025-07-11T00:17:59.374127816Z" level=info msg="StartContainer for \"71179eb6aab41804f033ace656cc12509ba30cf9e02f69de1fe364645ab7e42d\"" Jul 11 00:17:59.411615 systemd[1]: Started cri-containerd-71179eb6aab41804f033ace656cc12509ba30cf9e02f69de1fe364645ab7e42d.scope - libcontainer container 71179eb6aab41804f033ace656cc12509ba30cf9e02f69de1fe364645ab7e42d. 
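
Each containerd record above is a logfmt-style line carrying time, level, and msg fields. A rough Go sketch of pulling those fields apart (an assumed pattern, not containerd's own parser; the \" escapes inside msg are left as-is):

package main

import (
	"fmt"
	"regexp"
)

// Rough pattern for the logfmt-style records above, e.g.
//   time="2025-07-11T00:17:58.453863021Z" level=info msg="..."
var containerdLine = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-07-11T00:17:58.455500039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""`
	if m := containerdLine.FindStringSubmatch(line); m != nil {
		fmt.Println("time: ", m[1])
		fmt.Println("level:", m[2])
		fmt.Println("msg:  ", m[3]) // escapes preserved, not unquoted
	}
}
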
Jul 11 00:17:59.441429 containerd[1439]: time="2025-07-11T00:17:59.441368016Z" level=info msg="StartContainer for \"71179eb6aab41804f033ace656cc12509ba30cf9e02f69de1fe364645ab7e42d\" returns successfully" Jul 11 00:17:59.445975 containerd[1439]: time="2025-07-11T00:17:59.445309339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:17:59.500778 systemd-networkd[1387]: cali82e84a56439: Gained IPv6LL Jul 11 00:17:59.530920 kubelet[2470]: I0711 00:17:59.530735 2470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312327e1-547f-48cf-897a-ee24ca2c1ae6" path="/var/lib/kubelet/pods/312327e1-547f-48cf-897a-ee24ca2c1ae6/volumes" Jul 11 00:18:00.652610 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Jul 11 00:18:00.786696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3781225442.mount: Deactivated successfully. Jul 11 00:18:00.805007 containerd[1439]: time="2025-07-11T00:18:00.804957673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:00.805530 containerd[1439]: time="2025-07-11T00:18:00.805491472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 11 00:18:00.806151 containerd[1439]: time="2025-07-11T00:18:00.806099162Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:00.808387 containerd[1439]: time="2025-07-11T00:18:00.808352897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:00.809922 containerd[1439]: time="2025-07-11T00:18:00.809304678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.363949453s" Jul 11 00:18:00.809922 containerd[1439]: time="2025-07-11T00:18:00.809340724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 11 00:18:00.811556 containerd[1439]: time="2025-07-11T00:18:00.811529849Z" level=info msg="CreateContainer within sandbox \"1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:18:00.820824 containerd[1439]: time="2025-07-11T00:18:00.820779623Z" level=info msg="CreateContainer within sandbox \"1812c37ca8e49bfb17396533adbc00b5e873a655822d9541f39fcae7c1ed3517\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f158335f4fa5a0de3c527471b0053841c60a85cf88a5fc5907b227cb6b69d44b\"" Jul 11 00:18:00.821612 containerd[1439]: time="2025-07-11T00:18:00.821590703Z" level=info msg="StartContainer for \"f158335f4fa5a0de3c527471b0053841c60a85cf88a5fc5907b227cb6b69d44b\"" Jul 11 00:18:00.867698 systemd[1]: Started cri-containerd-f158335f4fa5a0de3c527471b0053841c60a85cf88a5fc5907b227cb6b69d44b.scope - libcontainer container 
f158335f4fa5a0de3c527471b0053841c60a85cf88a5fc5907b227cb6b69d44b. Jul 11 00:18:00.898014 containerd[1439]: time="2025-07-11T00:18:00.897967488Z" level=info msg="StartContainer for \"f158335f4fa5a0de3c527471b0053841c60a85cf88a5fc5907b227cb6b69d44b\" returns successfully" Jul 11 00:18:01.696511 kubelet[2470]: I0711 00:18:01.696398 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-69696c66f9-mkskw" podStartSLOduration=2.341288416 podStartE2EDuration="4.696382329s" podCreationTimestamp="2025-07-11 00:17:57 +0000 UTC" firstStartedPulling="2025-07-11 00:17:58.455232397 +0000 UTC m=+33.014951929" lastFinishedPulling="2025-07-11 00:18:00.81032631 +0000 UTC m=+35.370045842" observedRunningTime="2025-07-11 00:18:01.696198823 +0000 UTC m=+36.255918395" watchObservedRunningTime="2025-07-11 00:18:01.696382329 +0000 UTC m=+36.256101821" Jul 11 00:18:04.529425 containerd[1439]: time="2025-07-11T00:18:04.529362829Z" level=info msg="StopPodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\"" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.584 [INFO][4200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.584 [INFO][4200] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" iface="eth0" netns="/var/run/netns/cni-53b6d6b3-27f9-bcab-559f-d82840d4a2fa" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.585 [INFO][4200] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" iface="eth0" netns="/var/run/netns/cni-53b6d6b3-27f9-bcab-559f-d82840d4a2fa" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.586 [INFO][4200] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" iface="eth0" netns="/var/run/netns/cni-53b6d6b3-27f9-bcab-559f-d82840d4a2fa" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.586 [INFO][4200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.586 [INFO][4200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.610 [INFO][4209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.611 [INFO][4209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.611 [INFO][4209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.621 [WARNING][4209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.621 [INFO][4209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.623 [INFO][4209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:04.626815 containerd[1439]: 2025-07-11 00:18:04.625 [INFO][4200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:04.627289 containerd[1439]: time="2025-07-11T00:18:04.626965238Z" level=info msg="TearDown network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" successfully" Jul 11 00:18:04.627289 containerd[1439]: time="2025-07-11T00:18:04.626997442Z" level=info msg="StopPodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" returns successfully" Jul 11 00:18:04.630842 containerd[1439]: time="2025-07-11T00:18:04.627941689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8xxb,Uid:4bb0797a-e6b9-46c0-ab2f-d796e4b11505,Namespace:calico-system,Attempt:1,}" Jul 11 00:18:04.628959 systemd[1]: run-netns-cni\x2d53b6d6b3\x2d27f9\x2dbcab\x2d559f\x2dd82840d4a2fa.mount: Deactivated successfully. Jul 11 00:18:04.737532 systemd-networkd[1387]: cali727c42c0a08: Link UP Jul 11 00:18:04.738087 systemd-networkd[1387]: cali727c42c0a08: Gained carrier Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.671 [INFO][4218] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--t8xxb-eth0 csi-node-driver- calico-system 4bb0797a-e6b9-46c0-ab2f-d796e4b11505 965 0 2025-07-11 00:17:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-t8xxb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali727c42c0a08 [] [] }} ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.671 [INFO][4218] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.698 [INFO][4232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" HandleID="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 
00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.698 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" HandleID="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-t8xxb", "timestamp":"2025-07-11 00:18:04.698049261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.698 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.698 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.698 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.707 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.714 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.718 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.719 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.721 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.721 [INFO][4232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.723 [INFO][4232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1 Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.726 [INFO][4232] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.731 [INFO][4232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.731 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" host="localhost" Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.731 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:18:04.751467 containerd[1439]: 2025-07-11 00:18:04.731 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" HandleID="k8s-pod-network.3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.752018 containerd[1439]: 2025-07-11 00:18:04.733 [INFO][4218] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8xxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bb0797a-e6b9-46c0-ab2f-d796e4b11505", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-t8xxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali727c42c0a08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:04.752018 containerd[1439]: 2025-07-11 00:18:04.733 [INFO][4218] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.752018 containerd[1439]: 2025-07-11 00:18:04.733 [INFO][4218] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali727c42c0a08 ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.752018 containerd[1439]: 2025-07-11 00:18:04.738 [INFO][4218] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.752018 containerd[1439]: 2025-07-11 00:18:04.738 [INFO][4218] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8xxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bb0797a-e6b9-46c0-ab2f-d796e4b11505", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1", Pod:"csi-node-driver-t8xxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali727c42c0a08", MAC:"6a:9b:12:d6:37:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:04.752018 containerd[1439]: 2025-07-11 00:18:04.748 [INFO][4218] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1" Namespace="calico-system" Pod="csi-node-driver-t8xxb" WorkloadEndpoint="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:04.769212 containerd[1439]: time="2025-07-11T00:18:04.769031271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:18:04.769212 containerd[1439]: time="2025-07-11T00:18:04.769101761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:18:04.769212 containerd[1439]: time="2025-07-11T00:18:04.769113762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:04.770144 containerd[1439]: time="2025-07-11T00:18:04.769194893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:04.795574 systemd[1]: Started cri-containerd-3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1.scope - libcontainer container 3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1. 
Jul 11 00:18:04.805347 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:18:04.814991 containerd[1439]: time="2025-07-11T00:18:04.814946850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t8xxb,Uid:4bb0797a-e6b9-46c0-ab2f-d796e4b11505,Namespace:calico-system,Attempt:1,} returns sandbox id \"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1\"" Jul 11 00:18:04.816903 containerd[1439]: time="2025-07-11T00:18:04.816636556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:18:05.629164 systemd[1]: run-containerd-runc-k8s.io-3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1-runc.jMbnrD.mount: Deactivated successfully. Jul 11 00:18:05.908532 containerd[1439]: time="2025-07-11T00:18:05.907894031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:05.908889 containerd[1439]: time="2025-07-11T00:18:05.908844035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 11 00:18:05.909411 containerd[1439]: time="2025-07-11T00:18:05.909367583Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:05.911650 containerd[1439]: time="2025-07-11T00:18:05.911593514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:05.912478 containerd[1439]: time="2025-07-11T00:18:05.912436384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.095763664s" Jul 11 00:18:05.912478 containerd[1439]: time="2025-07-11T00:18:05.912477189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 11 00:18:05.915094 containerd[1439]: time="2025-07-11T00:18:05.914962953Z" level=info msg="CreateContainer within sandbox \"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:18:05.927975 containerd[1439]: time="2025-07-11T00:18:05.927921845Z" level=info msg="CreateContainer within sandbox \"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"06556e42c5ce8e2a3a12b3f7ef169c245226ae3f937d6f5b9cea20514509bc99\"" Jul 11 00:18:05.928682 containerd[1439]: time="2025-07-11T00:18:05.928581011Z" level=info msg="StartContainer for \"06556e42c5ce8e2a3a12b3f7ef169c245226ae3f937d6f5b9cea20514509bc99\"" Jul 11 00:18:05.966592 systemd[1]: Started cri-containerd-06556e42c5ce8e2a3a12b3f7ef169c245226ae3f937d6f5b9cea20514509bc99.scope - libcontainer container 06556e42c5ce8e2a3a12b3f7ef169c245226ae3f937d6f5b9cea20514509bc99. 
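
The csi:v3.30.2 pull record above reports both the compressed bytes read (8225702) and containerd's own duration (1.095763664s), which allows a rough effective-throughput estimate; a small sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the csi:v3.30.2 pull above. "bytes read" is the
	// compressed transfer, so this is only a rough effective rate.
	const bytesRead = 8225702
	d, err := time.ParseDuration("1.095763664s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("~%.1f MiB/s effective\n", float64(bytesRead)/d.Seconds()/(1<<20))
}
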
Jul 11 00:18:05.991582 containerd[1439]: time="2025-07-11T00:18:05.991466020Z" level=info msg="StartContainer for \"06556e42c5ce8e2a3a12b3f7ef169c245226ae3f937d6f5b9cea20514509bc99\" returns successfully" Jul 11 00:18:05.992961 containerd[1439]: time="2025-07-11T00:18:05.992929652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:18:06.604654 systemd-networkd[1387]: cali727c42c0a08: Gained IPv6LL Jul 11 00:18:07.114016 kubelet[2470]: I0711 00:18:07.113976 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:18:07.277742 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:38282.service - OpenSSH per-connection server daemon (10.0.0.1:38282). Jul 11 00:18:07.372901 sshd[4358]: Accepted publickey for core from 10.0.0.1 port 38282 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:07.376381 sshd[4358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:07.383606 systemd-logind[1419]: New session 8 of user core. Jul 11 00:18:07.392003 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 00:18:07.497533 containerd[1439]: time="2025-07-11T00:18:07.497475782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 11 00:18:07.497910 containerd[1439]: time="2025-07-11T00:18:07.497527829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:07.498736 containerd[1439]: time="2025-07-11T00:18:07.498694614Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:07.500828 containerd[1439]: time="2025-07-11T00:18:07.500791636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:07.502420 containerd[1439]: time="2025-07-11T00:18:07.501596216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.50863052s" Jul 11 00:18:07.502420 containerd[1439]: time="2025-07-11T00:18:07.501646423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 11 00:18:07.504308 containerd[1439]: time="2025-07-11T00:18:07.504282192Z" level=info msg="CreateContainer within sandbox \"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:18:07.516213 containerd[1439]: time="2025-07-11T00:18:07.516162555Z" level=info msg="CreateContainer within sandbox \"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"53a0d36a1d4b8c07e90a5967dca23f9942c381c21f255c96f2b19fd0a13aa0ca\"" Jul 11 00:18:07.518614 
containerd[1439]: time="2025-07-11T00:18:07.516735146Z" level=info msg="StartContainer for \"53a0d36a1d4b8c07e90a5967dca23f9942c381c21f255c96f2b19fd0a13aa0ca\"" Jul 11 00:18:07.529976 containerd[1439]: time="2025-07-11T00:18:07.529904150Z" level=info msg="StopPodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\"" Jul 11 00:18:07.530636 containerd[1439]: time="2025-07-11T00:18:07.530596076Z" level=info msg="StopPodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\"" Jul 11 00:18:07.564600 systemd[1]: Started cri-containerd-53a0d36a1d4b8c07e90a5967dca23f9942c381c21f255c96f2b19fd0a13aa0ca.scope - libcontainer container 53a0d36a1d4b8c07e90a5967dca23f9942c381c21f255c96f2b19fd0a13aa0ca. Jul 11 00:18:07.635270 containerd[1439]: time="2025-07-11T00:18:07.633767354Z" level=info msg="StartContainer for \"53a0d36a1d4b8c07e90a5967dca23f9942c381c21f255c96f2b19fd0a13aa0ca\" returns successfully" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.620 [INFO][4431] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.620 [INFO][4431] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" iface="eth0" netns="/var/run/netns/cni-35ee3fd6-287d-f8ea-240b-407e7279b30a" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.620 [INFO][4431] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" iface="eth0" netns="/var/run/netns/cni-35ee3fd6-287d-f8ea-240b-407e7279b30a" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.620 [INFO][4431] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" iface="eth0" netns="/var/run/netns/cni-35ee3fd6-287d-f8ea-240b-407e7279b30a" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.620 [INFO][4431] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.620 [INFO][4431] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.670 [INFO][4473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.670 [INFO][4473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.670 [INFO][4473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.678 [WARNING][4473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.678 [INFO][4473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.680 [INFO][4473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:07.684894 containerd[1439]: 2025-07-11 00:18:07.681 [INFO][4431] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:07.687289 containerd[1439]: time="2025-07-11T00:18:07.686833378Z" level=info msg="TearDown network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" successfully" Jul 11 00:18:07.687289 containerd[1439]: time="2025-07-11T00:18:07.687289235Z" level=info msg="StopPodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" returns successfully" Jul 11 00:18:07.688880 containerd[1439]: time="2025-07-11T00:18:07.688848389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7667647d94-b4l6t,Uid:9904d86a-2797-43dd-8a39-c9306c873001,Namespace:calico-system,Attempt:1,}" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.643 [INFO][4443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.643 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" iface="eth0" netns="/var/run/netns/cni-dbc9f450-432d-e0aa-24f8-61846e90f092" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.643 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" iface="eth0" netns="/var/run/netns/cni-dbc9f450-432d-e0aa-24f8-61846e90f092" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.643 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" iface="eth0" netns="/var/run/netns/cni-dbc9f450-432d-e0aa-24f8-61846e90f092" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.643 [INFO][4443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.643 [INFO][4443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.671 [INFO][4484] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.672 [INFO][4484] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.680 [INFO][4484] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.691 [WARNING][4484] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.691 [INFO][4484] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.693 [INFO][4484] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:07.701154 containerd[1439]: 2025-07-11 00:18:07.696 [INFO][4443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:07.702087 containerd[1439]: time="2025-07-11T00:18:07.701300184Z" level=info msg="TearDown network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" successfully" Jul 11 00:18:07.702087 containerd[1439]: time="2025-07-11T00:18:07.701324987Z" level=info msg="StopPodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" returns successfully" Jul 11 00:18:07.704428 containerd[1439]: time="2025-07-11T00:18:07.703996000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-7n5wn,Uid:6895268f-5207-4d25-89a2-65b99ac04608,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:18:07.709883 sshd[4358]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:07.715031 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:38282.service: Deactivated successfully. Jul 11 00:18:07.718066 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:18:07.721004 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:18:07.723437 systemd-logind[1419]: Removed session 8. 
Jul 11 00:18:07.814593 systemd-networkd[1387]: cali2027f526111: Link UP Jul 11 00:18:07.819178 systemd-networkd[1387]: cali2027f526111: Gained carrier Jul 11 00:18:07.829929 kubelet[2470]: I0711 00:18:07.829308 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t8xxb" podStartSLOduration=19.142506506 podStartE2EDuration="21.829287679s" podCreationTimestamp="2025-07-11 00:17:46 +0000 UTC" firstStartedPulling="2025-07-11 00:18:04.816185975 +0000 UTC m=+39.375905467" lastFinishedPulling="2025-07-11 00:18:07.502967108 +0000 UTC m=+42.062686640" observedRunningTime="2025-07-11 00:18:07.724134754 +0000 UTC m=+42.283854286" watchObservedRunningTime="2025-07-11 00:18:07.829287679 +0000 UTC m=+42.389007211" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.746 [INFO][4495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0 calico-kube-controllers-7667647d94- calico-system 9904d86a-2797-43dd-8a39-c9306c873001 1023 0 2025-07-11 00:17:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7667647d94 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7667647d94-b4l6t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2027f526111 [] [] }} ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.746 [INFO][4495] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.776 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" HandleID="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.776 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" HandleID="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058e4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7667647d94-b4l6t", "timestamp":"2025-07-11 00:18:07.776529614 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.776 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.776 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.776 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.786 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.790 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.794 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.796 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.798 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.798 [INFO][4524] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.800 [INFO][4524] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832 Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.803 [INFO][4524] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.808 [INFO][4524] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.808 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" host="localhost" Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.808 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:18:07.834122 containerd[1439]: 2025-07-11 00:18:07.808 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" HandleID="k8s-pod-network.094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.834683 containerd[1439]: 2025-07-11 00:18:07.811 [INFO][4495] cni-plugin/k8s.go 418: Populated endpoint ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0", GenerateName:"calico-kube-controllers-7667647d94-", Namespace:"calico-system", SelfLink:"", UID:"9904d86a-2797-43dd-8a39-c9306c873001", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7667647d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7667647d94-b4l6t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2027f526111", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:07.834683 containerd[1439]: 2025-07-11 00:18:07.811 [INFO][4495] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.834683 containerd[1439]: 2025-07-11 00:18:07.811 [INFO][4495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2027f526111 ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.834683 containerd[1439]: 2025-07-11 00:18:07.819 [INFO][4495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.834683 containerd[1439]: 2025-07-11 00:18:07.819 [INFO][4495] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0", GenerateName:"calico-kube-controllers-7667647d94-", Namespace:"calico-system", SelfLink:"", UID:"9904d86a-2797-43dd-8a39-c9306c873001", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7667647d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832", Pod:"calico-kube-controllers-7667647d94-b4l6t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2027f526111", MAC:"c2:ec:92:9d:50:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:07.834683 containerd[1439]: 2025-07-11 00:18:07.828 [INFO][4495] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832" Namespace="calico-system" Pod="calico-kube-controllers-7667647d94-b4l6t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:07.849892 containerd[1439]: time="2025-07-11T00:18:07.849464878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:18:07.850027 containerd[1439]: time="2025-07-11T00:18:07.849863567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:18:07.850027 containerd[1439]: time="2025-07-11T00:18:07.849946378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:07.850085 containerd[1439]: time="2025-07-11T00:18:07.850032349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:07.871644 systemd[1]: Started cri-containerd-094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832.scope - libcontainer container 094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832. 
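
Three endpoint MACs have now appeared (whisker, csi-node-driver, calico-kube-controllers). In each, the first octet has the locally-administered bit (0x02) set and the multicast bit (0x01) clear, consistent with randomly generated unicast addresses rather than vendor-assigned ones; a quick check:

package main

import (
	"fmt"
	"net"
)

func main() {
	for _, s := range []string{
		"b6:8f:65:c2:19:ec", // cali82e84a56439, whisker
		"6a:9b:12:d6:37:1d", // cali727c42c0a08, csi-node-driver
		"c2:ec:92:9d:50:c9", // cali2027f526111, calico-kube-controllers
	} {
		mac, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s local=%v multicast=%v\n", s, mac[0]&0x02 != 0, mac[0]&0x01 != 0)
	}
}
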
Jul 11 00:18:07.882688 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:07.926531 systemd-networkd[1387]: calib03773e7d46: Link UP
Jul 11 00:18:07.928039 containerd[1439]: time="2025-07-11T00:18:07.927710004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7667647d94-b4l6t,Uid:9904d86a-2797-43dd-8a39-c9306c873001,Namespace:calico-system,Attempt:1,} returns sandbox id \"094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832\""
Jul 11 00:18:07.930441 containerd[1439]: time="2025-07-11T00:18:07.929714295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 11 00:18:07.930166 systemd-networkd[1387]: calib03773e7d46: Gained carrier
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.762 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0 calico-apiserver-796d49478c- calico-apiserver 6895268f-5207-4d25-89a2-65b99ac04608 1024 0 2025-07-11 00:17:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:796d49478c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-796d49478c-7n5wn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib03773e7d46 [] [] }} ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.762 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.785 [INFO][4533] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" HandleID="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.786 [INFO][4533] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" HandleID="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-796d49478c-7n5wn", "timestamp":"2025-07-11 00:18:07.785940789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.786 [INFO][4533] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.808 [INFO][4533] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.808 [INFO][4533] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.891 [INFO][4533] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.895 [INFO][4533] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.899 [INFO][4533] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.901 [INFO][4533] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.903 [INFO][4533] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.903 [INFO][4533] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.905 [INFO][4533] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.910 [INFO][4533] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.916 [INFO][4533] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.916 [INFO][4533] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" host="localhost"
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.916 [INFO][4533] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:07.945415 containerd[1439]: 2025-07-11 00:18:07.916 [INFO][4533] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" HandleID="k8s-pod-network.6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.946105 containerd[1439]: 2025-07-11 00:18:07.922 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6895268f-5207-4d25-89a2-65b99ac04608", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-796d49478c-7n5wn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib03773e7d46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:07.946105 containerd[1439]: 2025-07-11 00:18:07.922 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.946105 containerd[1439]: 2025-07-11 00:18:07.922 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib03773e7d46 ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.946105 containerd[1439]: 2025-07-11 00:18:07.928 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.946105 containerd[1439]: 2025-07-11 00:18:07.931 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6895268f-5207-4d25-89a2-65b99ac04608", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984", Pod:"calico-apiserver-796d49478c-7n5wn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib03773e7d46", MAC:"fe:eb:71:85:84:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:07.946105 containerd[1439]: 2025-07-11 00:18:07.942 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-7n5wn" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0"
Jul 11 00:18:07.964878 containerd[1439]: time="2025-07-11T00:18:07.964496876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:18:07.964878 containerd[1439]: time="2025-07-11T00:18:07.964554763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:18:07.964878 containerd[1439]: time="2025-07-11T00:18:07.964579246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:07.964878 containerd[1439]: time="2025-07-11T00:18:07.964670818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:07.987583 systemd[1]: Started cri-containerd-6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984.scope - libcontainer container 6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984.
Jul 11 00:18:08.007717 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:08.028835 containerd[1439]: time="2025-07-11T00:18:08.028776589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-7n5wn,Uid:6895268f-5207-4d25-89a2-65b99ac04608,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984\""
Jul 11 00:18:08.161258 systemd[1]: run-containerd-runc-k8s.io-53a0d36a1d4b8c07e90a5967dca23f9942c381c21f255c96f2b19fd0a13aa0ca-runc.WA1vPU.mount: Deactivated successfully.
Jul 11 00:18:08.161354 systemd[1]: run-netns-cni\x2ddbc9f450\x2d432d\x2de0aa\x2d24f8\x2d61846e90f092.mount: Deactivated successfully.
Jul 11 00:18:08.161430 systemd[1]: run-netns-cni\x2d35ee3fd6\x2d287d\x2df8ea\x2d240b\x2d407e7279b30a.mount: Deactivated successfully.
Jul 11 00:18:08.529255 containerd[1439]: time="2025-07-11T00:18:08.529209554Z" level=info msg="StopPodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\""
Jul 11 00:18:08.529885 containerd[1439]: time="2025-07-11T00:18:08.529744340Z" level=info msg="StopPodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\""
Jul 11 00:18:08.530618 containerd[1439]: time="2025-07-11T00:18:08.530317930Z" level=info msg="StopPodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\""
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.584 [INFO][4682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.584 [INFO][4682] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" iface="eth0" netns="/var/run/netns/cni-3084db9f-8b57-544c-3b93-ac9032d01cc9"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.585 [INFO][4682] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" iface="eth0" netns="/var/run/netns/cni-3084db9f-8b57-544c-3b93-ac9032d01cc9"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.586 [INFO][4682] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" iface="eth0" netns="/var/run/netns/cni-3084db9f-8b57-544c-3b93-ac9032d01cc9"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.586 [INFO][4682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.586 [INFO][4682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.620 [INFO][4706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.621 [INFO][4706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.621 [INFO][4706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.632 [WARNING][4706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.632 [INFO][4706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.635 [INFO][4706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:08.645778 containerd[1439]: 2025-07-11 00:18:08.639 [INFO][4682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:08.649852 systemd[1]: run-netns-cni\x2d3084db9f\x2d8b57\x2d544c\x2d3b93\x2dac9032d01cc9.mount: Deactivated successfully.
Jul 11 00:18:08.650358 containerd[1439]: time="2025-07-11T00:18:08.650313996Z" level=info msg="TearDown network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" successfully"
Jul 11 00:18:08.650358 containerd[1439]: time="2025-07-11T00:18:08.650354521Z" level=info msg="StopPodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" returns successfully"
Jul 11 00:18:08.651398 kubelet[2470]: E0711 00:18:08.650692 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:08.652882 containerd[1439]: time="2025-07-11T00:18:08.651842143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tbbr8,Uid:e1f2b193-85e7-4131-903d-0d058505c956,Namespace:kube-system,Attempt:1,}"
Jul 11 00:18:08.658319 kubelet[2470]: I0711 00:18:08.657335 2470 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 11 00:18:08.659108 kubelet[2470]: I0711 00:18:08.659086 2470 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.617 [INFO][4684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.617 [INFO][4684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" iface="eth0" netns="/var/run/netns/cni-c58a4c2f-4b56-1faf-fb41-a98f5205443d"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.617 [INFO][4684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" iface="eth0" netns="/var/run/netns/cni-c58a4c2f-4b56-1faf-fb41-a98f5205443d"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.619 [INFO][4684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" iface="eth0" netns="/var/run/netns/cni-c58a4c2f-4b56-1faf-fb41-a98f5205443d"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.619 [INFO][4684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.619 [INFO][4684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.678 [INFO][4717] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.679 [INFO][4717] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.679 [INFO][4717] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.691 [WARNING][4717] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.691 [INFO][4717] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.695 [INFO][4717] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:08.701981 containerd[1439]: 2025-07-11 00:18:08.698 [INFO][4684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547"
Jul 11 00:18:08.705235 containerd[1439]: time="2025-07-11T00:18:08.703524740Z" level=info msg="TearDown network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" successfully"
Jul 11 00:18:08.705235 containerd[1439]: time="2025-07-11T00:18:08.703553703Z" level=info msg="StopPodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" returns successfully"
Jul 11 00:18:08.705397 systemd[1]: run-netns-cni\x2dc58a4c2f\x2d4b56\x2d1faf\x2dfb41\x2da98f5205443d.mount: Deactivated successfully.
Jul 11 00:18:08.706281 containerd[1439]: time="2025-07-11T00:18:08.706238152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8rxq7,Uid:ebfb1a06-1477-48f5-805f-9808c5339795,Namespace:calico-system,Attempt:1,}"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.656 [INFO][4683] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.656 [INFO][4683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" iface="eth0" netns="/var/run/netns/cni-b1339ad0-d907-4e3e-3dc5-aa5c8ddf5d6c"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.657 [INFO][4683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" iface="eth0" netns="/var/run/netns/cni-b1339ad0-d907-4e3e-3dc5-aa5c8ddf5d6c"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.659 [INFO][4683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" iface="eth0" netns="/var/run/netns/cni-b1339ad0-d907-4e3e-3dc5-aa5c8ddf5d6c"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.659 [INFO][4683] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.659 [INFO][4683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.689 [INFO][4724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.689 [INFO][4724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.696 [INFO][4724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.709 [WARNING][4724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.709 [INFO][4724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0"
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.712 [INFO][4724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:08.720615 containerd[1439]: 2025-07-11 00:18:08.717 [INFO][4683] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d"
Jul 11 00:18:08.721716 containerd[1439]: time="2025-07-11T00:18:08.721130892Z" level=info msg="TearDown network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" successfully"
Jul 11 00:18:08.721716 containerd[1439]: time="2025-07-11T00:18:08.721163776Z" level=info msg="StopPodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" returns successfully"
Jul 11 00:18:08.721829 kubelet[2470]: E0711 00:18:08.721534 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:08.723297 containerd[1439]: time="2025-07-11T00:18:08.722626475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvpp7,Uid:1aa6bb20-6a01-4656-8af5-3bf6153d0dfe,Namespace:kube-system,Attempt:1,}"
Jul 11 00:18:08.855497 systemd-networkd[1387]: calicb8501a0ef6: Link UP
Jul 11 00:18:08.856662 systemd-networkd[1387]: calicb8501a0ef6: Gained carrier
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.743 [INFO][4738] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0 coredns-668d6bf9bc- kube-system e1f2b193-85e7-4131-903d-0d058505c956 1044 0 2025-07-11 00:17:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-tbbr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb8501a0ef6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.743 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.793 [INFO][4777] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" HandleID="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.794 [INFO][4777] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" HandleID="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ddd80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tbbr8", "timestamp":"2025-07-11 00:18:08.793489576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.794 [INFO][4777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.794 [INFO][4777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.794 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.819 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.824 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.832 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.834 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.837 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.837 [INFO][4777] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.839 [INFO][4777] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.843 [INFO][4777] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.850 [INFO][4777] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.850 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" host="localhost"
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.850 [INFO][4777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:08.882998 containerd[1439]: 2025-07-11 00:18:08.850 [INFO][4777] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" HandleID="k8s-pod-network.828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.883974 containerd[1439]: 2025-07-11 00:18:08.853 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1f2b193-85e7-4131-903d-0d058505c956", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tbbr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb8501a0ef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:08.883974 containerd[1439]: 2025-07-11 00:18:08.853 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.883974 containerd[1439]: 2025-07-11 00:18:08.853 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb8501a0ef6 ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.883974 containerd[1439]: 2025-07-11 00:18:08.855 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.883974 containerd[1439]: 2025-07-11 00:18:08.855 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1f2b193-85e7-4131-903d-0d058505c956", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0", Pod:"coredns-668d6bf9bc-tbbr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb8501a0ef6", MAC:"ee:c3:0c:95:03:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:08.883974 containerd[1439]: 2025-07-11 00:18:08.870 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0" Namespace="kube-system" Pod="coredns-668d6bf9bc-tbbr8" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:08.912518 containerd[1439]: time="2025-07-11T00:18:08.912084511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:18:08.912518 containerd[1439]: time="2025-07-11T00:18:08.912486280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:18:08.912518 containerd[1439]: time="2025-07-11T00:18:08.912503122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:08.912823 containerd[1439]: time="2025-07-11T00:18:08.912761074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:08.940536 systemd[1]: Started cri-containerd-828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0.scope - libcontainer container 828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0.
Jul 11 00:18:08.954109 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:18:08.970824 containerd[1439]: time="2025-07-11T00:18:08.970774365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tbbr8,Uid:e1f2b193-85e7-4131-903d-0d058505c956,Namespace:kube-system,Attempt:1,} returns sandbox id \"828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0\""
Jul 11 00:18:08.971543 kubelet[2470]: E0711 00:18:08.971520 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:18:08.974362 containerd[1439]: time="2025-07-11T00:18:08.974329479Z" level=info msg="CreateContainer within sandbox \"828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:18:09.032297 systemd-networkd[1387]: cali307122bfa61: Link UP
Jul 11 00:18:09.033625 systemd-networkd[1387]: cali307122bfa61: Gained carrier
Jul 11 00:18:09.036876 systemd-networkd[1387]: cali2027f526111: Gained IPv6LL
Jul 11 00:18:09.040954 containerd[1439]: time="2025-07-11T00:18:09.040910728Z" level=info msg="CreateContainer within sandbox \"828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd318e6d40c14304893a4fe0f252944148407dad2ca3a7e4f9ee904ad46f01cb\""
Jul 11 00:18:09.042106 containerd[1439]: time="2025-07-11T00:18:09.042076108Z" level=info msg="StartContainer for \"bd318e6d40c14304893a4fe0f252944148407dad2ca3a7e4f9ee904ad46f01cb\""
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.777 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0 goldmane-768f4c5c69- calico-system ebfb1a06-1477-48f5-805f-9808c5339795 1045 0 2025-07-11 00:17:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-8rxq7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali307122bfa61 [] [] }} ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.777 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.821 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" HandleID="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.821 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" HandleID="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003220a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-8rxq7", "timestamp":"2025-07-11 00:18:08.821076828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.822 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.850 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.850 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.924 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.938 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.978 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.987 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.989 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:08.989 [INFO][4786] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:09.006 [INFO][4786] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:09.015 [INFO][4786] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:09.025 [INFO][4786] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:09.025 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" host="localhost"
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:09.025 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:09.057110 containerd[1439]: 2025-07-11 00:18:09.025 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" HandleID="k8s-pod-network.5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.058052 containerd[1439]: 2025-07-11 00:18:09.028 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ebfb1a06-1477-48f5-805f-9808c5339795", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-8rxq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali307122bfa61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:09.058052 containerd[1439]: 2025-07-11 00:18:09.029 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.058052 containerd[1439]: 2025-07-11 00:18:09.029 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali307122bfa61 ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.058052 containerd[1439]: 2025-07-11 00:18:09.032 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.058052 containerd[1439]: 2025-07-11 00:18:09.032 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ebfb1a06-1477-48f5-805f-9808c5339795", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a", Pod:"goldmane-768f4c5c69-8rxq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali307122bfa61", MAC:"f2:ba:0d:ad:6c:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:09.058052 containerd[1439]: 2025-07-11 00:18:09.048 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a" Namespace="calico-system" Pod="goldmane-768f4c5c69-8rxq7" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0"
Jul 11 00:18:09.089707 systemd[1]: Started cri-containerd-bd318e6d40c14304893a4fe0f252944148407dad2ca3a7e4f9ee904ad46f01cb.scope - libcontainer container bd318e6d40c14304893a4fe0f252944148407dad2ca3a7e4f9ee904ad46f01cb.
Jul 11 00:18:09.095569 containerd[1439]: time="2025-07-11T00:18:09.095452182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:18:09.095569 containerd[1439]: time="2025-07-11T00:18:09.095518950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:18:09.096178 containerd[1439]: time="2025-07-11T00:18:09.095533232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:09.096178 containerd[1439]: time="2025-07-11T00:18:09.096123502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:18:09.113707 systemd-networkd[1387]: calif69e3728438: Link UP
Jul 11 00:18:09.116805 systemd-networkd[1387]: calif69e3728438: Gained carrier
Jul 11 00:18:09.118670 systemd[1]: Started cri-containerd-5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a.scope - libcontainer container 5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a.
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:08.796 [INFO][4762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0 coredns-668d6bf9bc- kube-system 1aa6bb20-6a01-4656-8af5-3bf6153d0dfe 1046 0 2025-07-11 00:17:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-cvpp7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif69e3728438 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:08.797 [INFO][4762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:08.830 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" HandleID="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:08.830 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" HandleID="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000320b20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-cvpp7", "timestamp":"2025-07-11 00:18:08.830244228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:08.830 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.025 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.026 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.044 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.057 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.070 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.077 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.082 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.083 [INFO][4795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.086 [INFO][4795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.091 [INFO][4795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.100 [INFO][4795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.100 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" host="localhost"
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.100 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:09.142509 containerd[1439]: 2025-07-11 00:18:09.100 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" HandleID="k8s-pod-network.76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:09.143245 containerd[1439]: 2025-07-11 00:18:09.110 [INFO][4762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-cvpp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif69e3728438", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:09.143245 containerd[1439]: 2025-07-11 00:18:09.110 [INFO][4762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:09.143245 containerd[1439]: 2025-07-11 00:18:09.110 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif69e3728438 ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:09.143245 containerd[1439]: 2025-07-11 00:18:09.116 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:09.143245 
containerd[1439]: 2025-07-11 00:18:09.116 [INFO][4762] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac", Pod:"coredns-668d6bf9bc-cvpp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif69e3728438", MAC:"ae:be:a3:b7:ad:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:09.143245 containerd[1439]: 2025-07-11 00:18:09.136 [INFO][4762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvpp7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:09.164927 systemd[1]: run-netns-cni\x2db1339ad0\x2dd907\x2d4e3e\x2d3dc5\x2daa5c8ddf5d6c.mount: Deactivated successfully. Jul 11 00:18:09.173499 containerd[1439]: time="2025-07-11T00:18:09.173128407Z" level=info msg="StartContainer for \"bd318e6d40c14304893a4fe0f252944148407dad2ca3a7e4f9ee904ad46f01cb\" returns successfully" Jul 11 00:18:09.176665 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:18:09.200075 containerd[1439]: time="2025-07-11T00:18:09.199962821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8rxq7,Uid:ebfb1a06-1477-48f5-805f-9808c5339795,Namespace:calico-system,Attempt:1,} returns sandbox id \"5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a\"" Jul 11 00:18:09.207832 containerd[1439]: time="2025-07-11T00:18:09.207566892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:18:09.207832 containerd[1439]: time="2025-07-11T00:18:09.207632020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:18:09.207832 containerd[1439]: time="2025-07-11T00:18:09.207647982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:09.207832 containerd[1439]: time="2025-07-11T00:18:09.207735552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:09.242804 systemd[1]: Started cri-containerd-76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac.scope - libcontainer container 76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac. Jul 11 00:18:09.274061 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:18:09.300804 containerd[1439]: time="2025-07-11T00:18:09.297832425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvpp7,Uid:1aa6bb20-6a01-4656-8af5-3bf6153d0dfe,Namespace:kube-system,Attempt:1,} returns sandbox id \"76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac\"" Jul 11 00:18:09.301935 kubelet[2470]: E0711 00:18:09.301866 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:09.304393 containerd[1439]: time="2025-07-11T00:18:09.304351806Z" level=info msg="CreateContainer within sandbox \"76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:18:09.336845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931935750.mount: Deactivated successfully. Jul 11 00:18:09.342493 containerd[1439]: time="2025-07-11T00:18:09.342446610Z" level=info msg="CreateContainer within sandbox \"76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"16098de1241650143dfcd87350fb0eb3b3e1652106d9de99f91df4459fb4f308\"" Jul 11 00:18:09.343158 containerd[1439]: time="2025-07-11T00:18:09.343127531Z" level=info msg="StartContainer for \"16098de1241650143dfcd87350fb0eb3b3e1652106d9de99f91df4459fb4f308\"" Jul 11 00:18:09.357850 systemd-networkd[1387]: calib03773e7d46: Gained IPv6LL Jul 11 00:18:09.382587 systemd[1]: Started cri-containerd-16098de1241650143dfcd87350fb0eb3b3e1652106d9de99f91df4459fb4f308.scope - libcontainer container 16098de1241650143dfcd87350fb0eb3b3e1652106d9de99f91df4459fb4f308. Jul 11 00:18:09.426584 containerd[1439]: time="2025-07-11T00:18:09.426339179Z" level=info msg="StartContainer for \"16098de1241650143dfcd87350fb0eb3b3e1652106d9de99f91df4459fb4f308\" returns successfully" Jul 11 00:18:09.533311 containerd[1439]: time="2025-07-11T00:18:09.533168177Z" level=info msg="StopPodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\"" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.592 [INFO][5061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.592 [INFO][5061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" iface="eth0" netns="/var/run/netns/cni-e5af53b8-a883-d643-1876-3b0f7169a907" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.592 [INFO][5061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" iface="eth0" netns="/var/run/netns/cni-e5af53b8-a883-d643-1876-3b0f7169a907" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.592 [INFO][5061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" iface="eth0" netns="/var/run/netns/cni-e5af53b8-a883-d643-1876-3b0f7169a907" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.593 [INFO][5061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.593 [INFO][5061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.617 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.618 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.618 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.628 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.628 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.629 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:09.635063 containerd[1439]: 2025-07-11 00:18:09.632 [INFO][5061] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:09.635063 containerd[1439]: time="2025-07-11T00:18:09.634914285Z" level=info msg="TearDown network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" successfully" Jul 11 00:18:09.635063 containerd[1439]: time="2025-07-11T00:18:09.634952050Z" level=info msg="StopPodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" returns successfully" Jul 11 00:18:09.636577 containerd[1439]: time="2025-07-11T00:18:09.636272968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-6j58z,Uid:fe2fa29d-40a7-4cfb-b752-279a23adcd32,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:18:09.735958 kubelet[2470]: E0711 00:18:09.735708 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:09.741752 kubelet[2470]: E0711 00:18:09.741711 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:09.764443 kubelet[2470]: I0711 00:18:09.762870 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cvpp7" podStartSLOduration=37.762853171 podStartE2EDuration="37.762853171s" podCreationTimestamp="2025-07-11 00:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:09.749671432 +0000 UTC m=+44.309390964" watchObservedRunningTime="2025-07-11 00:18:09.762853171 +0000 UTC m=+44.322572703" Jul 11 00:18:09.780548 kubelet[2470]: I0711 00:18:09.778716 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tbbr8" podStartSLOduration=37.778700309 podStartE2EDuration="37.778700309s" podCreationTimestamp="2025-07-11 00:17:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:18:09.776608059 +0000 UTC m=+44.336327591" watchObservedRunningTime="2025-07-11 00:18:09.778700309 +0000 UTC m=+44.338419841" Jul 11 00:18:09.801730 systemd-networkd[1387]: calie0789f02fc4: Link UP Jul 11 00:18:09.802817 systemd-networkd[1387]: calie0789f02fc4: Gained carrier Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.700 [INFO][5076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0 calico-apiserver-796d49478c- calico-apiserver fe2fa29d-40a7-4cfb-b752-279a23adcd32 1077 0 2025-07-11 00:17:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:796d49478c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-796d49478c-6j58z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0789f02fc4 [] [] }} ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 
00:18:09.700 [INFO][5076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.728 [INFO][5092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" HandleID="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.728 [INFO][5092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" HandleID="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-796d49478c-6j58z", "timestamp":"2025-07-11 00:18:09.728673517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.728 [INFO][5092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.728 [INFO][5092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.729 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.742 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.751 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.760 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.763 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.770 [INFO][5092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.770 [INFO][5092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.777 [INFO][5092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465 Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.782 [INFO][5092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.796 [INFO][5092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.796 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" host="localhost" Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.796 [INFO][5092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:18:09.819074 containerd[1439]: 2025-07-11 00:18:09.796 [INFO][5092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" HandleID="k8s-pod-network.2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.819715 containerd[1439]: 2025-07-11 00:18:09.799 [INFO][5076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2fa29d-40a7-4cfb-b752-279a23adcd32", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-796d49478c-6j58z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0789f02fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:09.819715 containerd[1439]: 2025-07-11 00:18:09.799 [INFO][5076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.819715 containerd[1439]: 2025-07-11 00:18:09.799 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0789f02fc4 ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.819715 containerd[1439]: 2025-07-11 00:18:09.803 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.819715 containerd[1439]: 2025-07-11 00:18:09.804 [INFO][5076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2fa29d-40a7-4cfb-b752-279a23adcd32", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465", Pod:"calico-apiserver-796d49478c-6j58z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0789f02fc4", MAC:"6e:a0:30:14:fe:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:09.819715 containerd[1439]: 2025-07-11 00:18:09.815 [INFO][5076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465" Namespace="calico-apiserver" Pod="calico-apiserver-796d49478c-6j58z" WorkloadEndpoint="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:09.846844 containerd[1439]: time="2025-07-11T00:18:09.846748221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:18:09.846844 containerd[1439]: time="2025-07-11T00:18:09.846809468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:18:09.847031 containerd[1439]: time="2025-07-11T00:18:09.846835632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:09.847031 containerd[1439]: time="2025-07-11T00:18:09.846961287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:18:09.872629 systemd[1]: Started cri-containerd-2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465.scope - libcontainer container 2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465. 
Jul 11 00:18:09.885908 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:18:09.909018 containerd[1439]: time="2025-07-11T00:18:09.908935831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-796d49478c-6j58z,Uid:fe2fa29d-40a7-4cfb-b752-279a23adcd32,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465\"" Jul 11 00:18:09.913151 containerd[1439]: time="2025-07-11T00:18:09.913111611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:09.914319 containerd[1439]: time="2025-07-11T00:18:09.914233545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 11 00:18:09.915013 containerd[1439]: time="2025-07-11T00:18:09.914964353Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:09.919731 containerd[1439]: time="2025-07-11T00:18:09.919689879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:09.920844 containerd[1439]: time="2025-07-11T00:18:09.920819054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.991058274s" Jul 11 00:18:09.920892 containerd[1439]: time="2025-07-11T00:18:09.920850138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 11 00:18:09.922157 containerd[1439]: time="2025-07-11T00:18:09.921685278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:18:09.928763 containerd[1439]: time="2025-07-11T00:18:09.928701838Z" level=info msg="CreateContainer within sandbox \"094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:18:09.932613 systemd-networkd[1387]: calicb8501a0ef6: Gained IPv6LL Jul 11 00:18:09.940882 containerd[1439]: time="2025-07-11T00:18:09.940840333Z" level=info msg="CreateContainer within sandbox \"094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"275b98a3f6ec3d5a37671fb767e9cb5f7b1557e19bf47269d142f182f4e4036f\"" Jul 11 00:18:09.941425 containerd[1439]: time="2025-07-11T00:18:09.941383318Z" level=info msg="StartContainer for \"275b98a3f6ec3d5a37671fb767e9cb5f7b1557e19bf47269d142f182f4e4036f\"" Jul 11 00:18:09.972617 systemd[1]: Started cri-containerd-275b98a3f6ec3d5a37671fb767e9cb5f7b1557e19bf47269d142f182f4e4036f.scope - libcontainer container 275b98a3f6ec3d5a37671fb767e9cb5f7b1557e19bf47269d142f182f4e4036f. 
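The PullImage result above reports the kube-controllers image under both a repo tag (mutable) and a repo digest (content-addressed, pinning the exact bytes). A small Go sketch for splitting such a by-digest reference into repository and digest; production parsers such as github.com/distribution/reference handle many more reference forms than this.

package main

import (
	"fmt"
	"strings"
)

// splitDigestRef splits a by-digest image reference of the form seen in
// these PullImage entries, e.g. repo@sha256:<64 hex chars>.
func splitDigestRef(ref string) (repo, digest string, err error) {
	i := strings.LastIndex(ref, "@")
	if i < 0 || !strings.HasPrefix(ref[i+1:], "sha256:") {
		return "", "", fmt.Errorf("not a sha256 digest reference: %q", ref)
	}
	return ref[:i], ref[i+1:], nil
}

func main() {
	repo, dgst, err := splitDigestRef("ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce")
	fmt.Println(repo, dgst, err)
}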
Jul 11 00:18:10.012936 containerd[1439]: time="2025-07-11T00:18:10.012884297Z" level=info msg="StartContainer for \"275b98a3f6ec3d5a37671fb767e9cb5f7b1557e19bf47269d142f182f4e4036f\" returns successfully" Jul 11 00:18:10.162973 systemd[1]: run-netns-cni\x2de5af53b8\x2da883\x2dd643\x2d1876\x2d3b0f7169a907.mount: Deactivated successfully. Jul 11 00:18:10.188635 systemd-networkd[1387]: cali307122bfa61: Gained IPv6LL Jul 11 00:18:10.252663 systemd-networkd[1387]: calif69e3728438: Gained IPv6LL Jul 11 00:18:10.748279 kubelet[2470]: E0711 00:18:10.747262 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:10.748279 kubelet[2470]: E0711 00:18:10.747638 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:11.660643 systemd-networkd[1387]: calie0789f02fc4: Gained IPv6LL Jul 11 00:18:11.750477 kubelet[2470]: E0711 00:18:11.750444 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:11.751061 kubelet[2470]: E0711 00:18:11.751031 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:18:11.759819 containerd[1439]: time="2025-07-11T00:18:11.759761071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:11.761566 containerd[1439]: time="2025-07-11T00:18:11.761210999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 11 00:18:11.761934 containerd[1439]: time="2025-07-11T00:18:11.761807627Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:11.787073 containerd[1439]: time="2025-07-11T00:18:11.787025217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:11.787894 containerd[1439]: time="2025-07-11T00:18:11.787849792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.86613007s" Jul 11 00:18:11.787894 containerd[1439]: time="2025-07-11T00:18:11.787891277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:18:11.789718 containerd[1439]: time="2025-07-11T00:18:11.789680323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:18:11.790254 containerd[1439]: time="2025-07-11T00:18:11.790225346Z" level=info msg="CreateContainer within sandbox \"6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:18:11.806128 containerd[1439]: time="2025-07-11T00:18:11.806083416Z" level=info msg="CreateContainer within sandbox \"6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7857b8cc29cf161511fa7af58ac414aeb99d9bcfdfa27b8c4bb980814cfed8a9\"" Jul 11 00:18:11.806642 containerd[1439]: time="2025-07-11T00:18:11.806604796Z" level=info msg="StartContainer for \"7857b8cc29cf161511fa7af58ac414aeb99d9bcfdfa27b8c4bb980814cfed8a9\"" Jul 11 00:18:11.825275 kubelet[2470]: I0711 00:18:11.825201 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7667647d94-b4l6t" podStartSLOduration=23.832846991 podStartE2EDuration="25.825160937s" podCreationTimestamp="2025-07-11 00:17:46 +0000 UTC" firstStartedPulling="2025-07-11 00:18:07.929241916 +0000 UTC m=+42.488961408" lastFinishedPulling="2025-07-11 00:18:09.921555822 +0000 UTC m=+44.481275354" observedRunningTime="2025-07-11 00:18:10.760312567 +0000 UTC m=+45.320032099" watchObservedRunningTime="2025-07-11 00:18:11.825160937 +0000 UTC m=+46.384880509" Jul 11 00:18:11.840624 systemd[1]: Started cri-containerd-7857b8cc29cf161511fa7af58ac414aeb99d9bcfdfa27b8c4bb980814cfed8a9.scope - libcontainer container 7857b8cc29cf161511fa7af58ac414aeb99d9bcfdfa27b8c4bb980814cfed8a9. Jul 11 00:18:11.870654 containerd[1439]: time="2025-07-11T00:18:11.870605220Z" level=info msg="StartContainer for \"7857b8cc29cf161511fa7af58ac414aeb99d9bcfdfa27b8c4bb980814cfed8a9\" returns successfully" Jul 11 00:18:12.725898 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:53098.service - OpenSSH per-connection server daemon (10.0.0.1:53098). Jul 11 00:18:12.807481 sshd[5275]: Accepted publickey for core from 10.0.0.1 port 53098 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:12.809655 sshd[5275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:12.816087 systemd-logind[1419]: New session 9 of user core. Jul 11 00:18:12.823591 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:18:13.130704 sshd[5275]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:13.134520 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:53098.service: Deactivated successfully. Jul 11 00:18:13.137087 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:18:13.137983 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:18:13.139377 systemd-logind[1419]: Removed session 9. Jul 11 00:18:13.761355 kubelet[2470]: I0711 00:18:13.760996 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:18:14.758902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853056752.mount: Deactivated successfully. 
Jul 11 00:18:15.153325 containerd[1439]: time="2025-07-11T00:18:15.153197608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:15.154340 containerd[1439]: time="2025-07-11T00:18:15.154174833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 11 00:18:15.155114 containerd[1439]: time="2025-07-11T00:18:15.155021605Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:15.157470 containerd[1439]: time="2025-07-11T00:18:15.157361778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:15.158627 containerd[1439]: time="2025-07-11T00:18:15.158586910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.368739248s" Jul 11 00:18:15.158627 containerd[1439]: time="2025-07-11T00:18:15.158626354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 11 00:18:15.171453 containerd[1439]: time="2025-07-11T00:18:15.170379745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:18:15.171453 containerd[1439]: time="2025-07-11T00:18:15.171280882Z" level=info msg="CreateContainer within sandbox \"5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:18:15.236475 containerd[1439]: time="2025-07-11T00:18:15.236349034Z" level=info msg="CreateContainer within sandbox \"5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0a611cc017417eafa35fa7c8ad82562c5003e88e10547bb6939086d058df61f6\"" Jul 11 00:18:15.237232 containerd[1439]: time="2025-07-11T00:18:15.237192045Z" level=info msg="StartContainer for \"0a611cc017417eafa35fa7c8ad82562c5003e88e10547bb6939086d058df61f6\"" Jul 11 00:18:15.266634 systemd[1]: Started cri-containerd-0a611cc017417eafa35fa7c8ad82562c5003e88e10547bb6939086d058df61f6.scope - libcontainer container 0a611cc017417eafa35fa7c8ad82562c5003e88e10547bb6939086d058df61f6. 
Jul 11 00:18:15.301510 containerd[1439]: time="2025-07-11T00:18:15.301455710Z" level=info msg="StartContainer for \"0a611cc017417eafa35fa7c8ad82562c5003e88e10547bb6939086d058df61f6\" returns successfully" Jul 11 00:18:15.459541 containerd[1439]: time="2025-07-11T00:18:15.459421222Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:18:15.461437 containerd[1439]: time="2025-07-11T00:18:15.460904022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:18:15.463247 containerd[1439]: time="2025-07-11T00:18:15.463213632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 292.786762ms" Jul 11 00:18:15.463247 containerd[1439]: time="2025-07-11T00:18:15.463247435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:18:15.466463 containerd[1439]: time="2025-07-11T00:18:15.466429179Z" level=info msg="CreateContainer within sandbox \"2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:18:15.477168 containerd[1439]: time="2025-07-11T00:18:15.477123335Z" level=info msg="CreateContainer within sandbox \"2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1c2dd7cf88ccf15f62379c35edfb6bbd23332ade71ade6e219582fdff884011c\"" Jul 11 00:18:15.477710 containerd[1439]: time="2025-07-11T00:18:15.477676595Z" level=info msg="StartContainer for \"1c2dd7cf88ccf15f62379c35edfb6bbd23332ade71ade6e219582fdff884011c\"" Jul 11 00:18:15.505571 systemd[1]: Started cri-containerd-1c2dd7cf88ccf15f62379c35edfb6bbd23332ade71ade6e219582fdff884011c.scope - libcontainer container 1c2dd7cf88ccf15f62379c35edfb6bbd23332ade71ade6e219582fdff884011c. 
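Compare the two apiserver pulls in this log: the first (completed at 00:18:11) transferred about 44 MB in 1.87s, while the repeat just above returned in 292.786762ms with only 77 bytes read, because the content store already held every layer and evidently only image metadata had to be re-resolved (hence the ImageUpdate rather than ImageCreate event). A sketch of the check-before-pull pattern against containerd's Go client follows; the socket path and the k8s.io namespace are the conventional ones, and error handling is trimmed.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// When the content store already holds an image, as with the second
	// apiserver pull above, a Pull re-resolves the manifest and returns
	// in milliseconds, transferring almost no bytes.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.2"
	if img, err := client.GetImage(ctx, ref); err == nil {
		fmt.Println("already present:", img.Name())
		return
	}
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name())
}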
Jul 11 00:18:15.547483 containerd[1439]: time="2025-07-11T00:18:15.547441814Z" level=info msg="StartContainer for \"1c2dd7cf88ccf15f62379c35edfb6bbd23332ade71ade6e219582fdff884011c\" returns successfully" Jul 11 00:18:15.784757 kubelet[2470]: I0711 00:18:15.784619 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-796d49478c-7n5wn" podStartSLOduration=30.026553815 podStartE2EDuration="33.784594204s" podCreationTimestamp="2025-07-11 00:17:42 +0000 UTC" firstStartedPulling="2025-07-11 00:18:08.030979938 +0000 UTC m=+42.590699470" lastFinishedPulling="2025-07-11 00:18:11.789020327 +0000 UTC m=+46.348739859" observedRunningTime="2025-07-11 00:18:12.788542145 +0000 UTC m=+47.348261717" watchObservedRunningTime="2025-07-11 00:18:15.784594204 +0000 UTC m=+50.344313736" Jul 11 00:18:15.786315 kubelet[2470]: I0711 00:18:15.786004 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-8rxq7" podStartSLOduration=23.819789167 podStartE2EDuration="29.785992475s" podCreationTimestamp="2025-07-11 00:17:46 +0000 UTC" firstStartedPulling="2025-07-11 00:18:09.203410354 +0000 UTC m=+43.763129886" lastFinishedPulling="2025-07-11 00:18:15.169613662 +0000 UTC m=+49.729333194" observedRunningTime="2025-07-11 00:18:15.783638021 +0000 UTC m=+50.343357553" watchObservedRunningTime="2025-07-11 00:18:15.785992475 +0000 UTC m=+50.345712007" Jul 11 00:18:15.808302 kubelet[2470]: I0711 00:18:15.808225 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-796d49478c-6j58z" podStartSLOduration=28.254457186 podStartE2EDuration="33.808192074s" podCreationTimestamp="2025-07-11 00:17:42 +0000 UTC" firstStartedPulling="2025-07-11 00:18:09.910160817 +0000 UTC m=+44.469880349" lastFinishedPulling="2025-07-11 00:18:15.463895705 +0000 UTC m=+50.023615237" observedRunningTime="2025-07-11 00:18:15.805893226 +0000 UTC m=+50.365612758" watchObservedRunningTime="2025-07-11 00:18:15.808192074 +0000 UTC m=+50.367911606" Jul 11 00:18:16.774809 kubelet[2470]: I0711 00:18:16.774749 2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:18:18.142292 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:53114.service - OpenSSH per-connection server daemon (10.0.0.1:53114). Jul 11 00:18:18.214041 sshd[5416]: Accepted publickey for core from 10.0.0.1 port 53114 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:18.215961 sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:18.220243 systemd-logind[1419]: New session 10 of user core. Jul 11 00:18:18.229592 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:18:18.540652 sshd[5416]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:18.550192 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:53114.service: Deactivated successfully. Jul 11 00:18:18.552745 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:18:18.556224 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit. Jul 11 00:18:18.564806 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:53116.service - OpenSSH per-connection server daemon (10.0.0.1:53116). Jul 11 00:18:18.566198 systemd-logind[1419]: Removed session 10. 
Jul 11 00:18:18.605329 sshd[5431]: Accepted publickey for core from 10.0.0.1 port 53116 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:18.606685 sshd[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:18.613068 systemd-logind[1419]: New session 11 of user core. Jul 11 00:18:18.619586 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:18:18.881765 sshd[5431]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:18.894614 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:53116.service: Deactivated successfully. Jul 11 00:18:18.899770 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:18:18.901839 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:18:18.913771 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Jul 11 00:18:18.915610 systemd-logind[1419]: Removed session 11. Jul 11 00:18:18.952135 sshd[5451]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:18.953936 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:18.958144 systemd-logind[1419]: New session 12 of user core. Jul 11 00:18:18.972593 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:18:19.127864 sshd[5451]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:19.132908 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:53120.service: Deactivated successfully. Jul 11 00:18:19.135871 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:18:19.138199 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:18:19.139169 systemd-logind[1419]: Removed session 12. Jul 11 00:18:24.138478 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:43392.service - OpenSSH per-connection server daemon (10.0.0.1:43392). Jul 11 00:18:24.194291 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 43392 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:24.195113 sshd[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:24.199511 systemd-logind[1419]: New session 13 of user core. Jul 11 00:18:24.211680 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 00:18:24.384984 sshd[5490]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:24.393269 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:43392.service: Deactivated successfully. Jul 11 00:18:24.397584 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:18:24.399291 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:18:24.410736 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:43400.service - OpenSSH per-connection server daemon (10.0.0.1:43400). Jul 11 00:18:24.412062 systemd-logind[1419]: Removed session 13. Jul 11 00:18:24.447732 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 43400 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:24.449007 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:24.453752 systemd-logind[1419]: New session 14 of user core. Jul 11 00:18:24.465691 systemd[1]: Started session-14.scope - Session 14 of User core. 
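The sshd and systemd-logind entries around here repeat one lifecycle per connection: publickey accepted, pam_unix opens the session, systemd-logind allocates session N and a session-N.scope, then the mirror image on close. Pairing the open and close lines by the sshd PID in brackets is enough to audit the sequence; a minimal Go sketch over two sample lines trimmed from this log (timestamps dropped):

package main

import (
	"fmt"
	"regexp"
)

var (
	opened = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened for user (\w+)`)
	closed = regexp.MustCompile(`sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user (\w+)`)
)

func main() {
	lines := []string{
		`sshd[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)`,
		`sshd[5416]: pam_unix(sshd:session): session closed for user core`,
	}
	open := map[string]bool{}
	for _, l := range lines {
		if m := opened.FindStringSubmatch(l); m != nil {
			open[m[1]] = true // session began under this sshd PID
		} else if m := closed.FindStringSubmatch(l); m != nil && open[m[1]] {
			fmt.Printf("sshd[%s]: %s logged in and out\n", m[1], m[2])
		}
	}
}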
Jul 11 00:18:24.698591 sshd[5504]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:24.709764 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:43400.service: Deactivated successfully. Jul 11 00:18:24.711940 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 00:18:24.714643 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit. Jul 11 00:18:24.728008 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:43416.service - OpenSSH per-connection server daemon (10.0.0.1:43416). Jul 11 00:18:24.729618 systemd-logind[1419]: Removed session 14. Jul 11 00:18:24.769216 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 43416 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:24.770741 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:24.775019 systemd-logind[1419]: New session 15 of user core. Jul 11 00:18:24.786616 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 11 00:18:25.518355 containerd[1439]: time="2025-07-11T00:18:25.518313426Z" level=info msg="StopPodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\"" Jul 11 00:18:25.591740 sshd[5517]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:25.599267 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:43416.service: Deactivated successfully. Jul 11 00:18:25.602022 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 00:18:25.604941 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit. Jul 11 00:18:25.611733 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:43422.service - OpenSSH per-connection server daemon (10.0.0.1:43422). Jul 11 00:18:25.612671 systemd-logind[1419]: Removed session 15. Jul 11 00:18:25.669724 sshd[5555]: Accepted publickey for core from 10.0.0.1 port 43422 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:25.673924 sshd[5555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.626 [WARNING][5540] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2fa29d-40a7-4cfb-b752-279a23adcd32", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465", Pod:"calico-apiserver-796d49478c-6j58z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0789f02fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.627 [INFO][5540] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.627 [INFO][5540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" iface="eth0" netns="" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.627 [INFO][5540] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.627 [INFO][5540] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.660 [INFO][5559] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.660 [INFO][5559] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.660 [INFO][5559] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.669 [WARNING][5559] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.669 [INFO][5559] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.671 [INFO][5559] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:25.676642 containerd[1439]: 2025-07-11 00:18:25.674 [INFO][5540] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.677008 containerd[1439]: time="2025-07-11T00:18:25.676681195Z" level=info msg="TearDown network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" successfully" Jul 11 00:18:25.677008 containerd[1439]: time="2025-07-11T00:18:25.676705797Z" level=info msg="StopPodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" returns successfully" Jul 11 00:18:25.677676 containerd[1439]: time="2025-07-11T00:18:25.677634367Z" level=info msg="RemovePodSandbox for \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\"" Jul 11 00:18:25.680199 containerd[1439]: time="2025-07-11T00:18:25.680080682Z" level=info msg="Forcibly stopping sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\"" Jul 11 00:18:25.680439 systemd-logind[1419]: New session 16 of user core. Jul 11 00:18:25.686619 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.719 [WARNING][5578] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe2fa29d-40a7-4cfb-b752-279a23adcd32", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2857f7a34cbc01b8dec99ecf2ce936ded4db1078fc8ab1bf135531d51aa16465", Pod:"calico-apiserver-796d49478c-6j58z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0789f02fc4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.719 [INFO][5578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.719 [INFO][5578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" iface="eth0" netns="" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.719 [INFO][5578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.719 [INFO][5578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.738 [INFO][5587] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.738 [INFO][5587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.738 [INFO][5587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.746 [WARNING][5587] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.746 [INFO][5587] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" HandleID="k8s-pod-network.adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Workload="localhost-k8s-calico--apiserver--796d49478c--6j58z-eth0" Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.748 [INFO][5587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:25.754034 containerd[1439]: 2025-07-11 00:18:25.750 [INFO][5578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb" Jul 11 00:18:25.754462 containerd[1439]: time="2025-07-11T00:18:25.754077367Z" level=info msg="TearDown network for sandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" successfully" Jul 11 00:18:25.776861 containerd[1439]: time="2025-07-11T00:18:25.776526129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:25.776861 containerd[1439]: time="2025-07-11T00:18:25.776637420Z" level=info msg="RemovePodSandbox \"adafe42c2281b1f7bd5ec06cfde096eae26eb25357771d8bb6c1f203f5bb30cb\" returns successfully" Jul 11 00:18:25.777857 containerd[1439]: time="2025-07-11T00:18:25.777509784Z" level=info msg="StopPodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\"" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.817 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8xxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bb0797a-e6b9-46c0-ab2f-d796e4b11505", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1", Pod:"csi-node-driver-t8xxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali727c42c0a08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.817 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.817 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" iface="eth0" netns="" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.817 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.817 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.838 [INFO][5617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.838 [INFO][5617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.838 [INFO][5617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.854 [WARNING][5617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.854 [INFO][5617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.856 [INFO][5617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:25.859507 containerd[1439]: 2025-07-11 00:18:25.857 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.860323 containerd[1439]: time="2025-07-11T00:18:25.860151021Z" level=info msg="TearDown network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" successfully" Jul 11 00:18:25.860323 containerd[1439]: time="2025-07-11T00:18:25.860186304Z" level=info msg="StopPodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" returns successfully" Jul 11 00:18:25.861082 containerd[1439]: time="2025-07-11T00:18:25.860785722Z" level=info msg="RemovePodSandbox for \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\"" Jul 11 00:18:25.861082 containerd[1439]: time="2025-07-11T00:18:25.860816485Z" level=info msg="Forcibly stopping sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\"" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.907 [WARNING][5636] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--t8xxb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4bb0797a-e6b9-46c0-ab2f-d796e4b11505", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3cc76db5c7da35c9c0b117a73461563430e488923944146c3dd7ea0de404f2f1", Pod:"csi-node-driver-t8xxb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali727c42c0a08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.907 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.907 [INFO][5636] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" iface="eth0" netns="" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.907 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.908 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.929 [INFO][5647] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.929 [INFO][5647] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.929 [INFO][5647] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.938 [WARNING][5647] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.938 [INFO][5647] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" HandleID="k8s-pod-network.2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Workload="localhost-k8s-csi--node--driver--t8xxb-eth0" Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.940 [INFO][5647] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:25.944129 containerd[1439]: 2025-07-11 00:18:25.942 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32" Jul 11 00:18:25.945003 containerd[1439]: time="2025-07-11T00:18:25.944516664Z" level=info msg="TearDown network for sandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" successfully" Jul 11 00:18:25.965533 containerd[1439]: time="2025-07-11T00:18:25.965484803Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:25.966259 containerd[1439]: time="2025-07-11T00:18:25.966115104Z" level=info msg="RemovePodSandbox \"2656a1f55867b003eefab68f06be6778f8afaa64e4365f30c6e7839b18460e32\" returns successfully" Jul 11 00:18:25.966889 containerd[1439]: time="2025-07-11T00:18:25.966828492Z" level=info msg="StopPodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\"" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.016 [WARNING][5664] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac", Pod:"coredns-668d6bf9bc-cvpp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif69e3728438", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.016 [INFO][5664] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.017 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" iface="eth0" netns="" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.017 [INFO][5664] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.017 [INFO][5664] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.041 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.041 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.041 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.049 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.050 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.051 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.059984 containerd[1439]: 2025-07-11 00:18:26.058 [INFO][5664] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.059984 containerd[1439]: time="2025-07-11T00:18:26.059954011Z" level=info msg="TearDown network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" successfully" Jul 11 00:18:26.059984 containerd[1439]: time="2025-07-11T00:18:26.059979013Z" level=info msg="StopPodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" returns successfully" Jul 11 00:18:26.061217 containerd[1439]: time="2025-07-11T00:18:26.061181848Z" level=info msg="RemovePodSandbox for \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\"" Jul 11 00:18:26.061288 containerd[1439]: time="2025-07-11T00:18:26.061225332Z" level=info msg="Forcibly stopping sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\"" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.111 [WARNING][5690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1aa6bb20-6a01-4656-8af5-3bf6153d0dfe", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"76c6fd8ed03fca20ddb6bec4e7251c17d6915b36798d2bd4176b130ecef066ac", Pod:"coredns-668d6bf9bc-cvpp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif69e3728438", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.111 [INFO][5690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.111 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" iface="eth0" netns="" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.111 [INFO][5690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.111 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.133 [INFO][5698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.134 [INFO][5698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.134 [INFO][5698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.146 [WARNING][5698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.147 [INFO][5698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" HandleID="k8s-pod-network.640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Workload="localhost-k8s-coredns--668d6bf9bc--cvpp7-eth0" Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.148 [INFO][5698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.151945 containerd[1439]: 2025-07-11 00:18:26.150 [INFO][5690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d" Jul 11 00:18:26.152364 containerd[1439]: time="2025-07-11T00:18:26.151983277Z" level=info msg="TearDown network for sandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" successfully" Jul 11 00:18:26.156536 containerd[1439]: time="2025-07-11T00:18:26.156495988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:26.156699 containerd[1439]: time="2025-07-11T00:18:26.156562514Z" level=info msg="RemovePodSandbox \"640349e7821725db3ca9f6ddbcf00eca2cc2689f44960e0aa3f3fd03b4604b3d\" returns successfully" Jul 11 00:18:26.156699 containerd[1439]: time="2025-07-11T00:18:26.157072443Z" level=info msg="StopPodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\"" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.194 [WARNING][5716] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" WorkloadEndpoint="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.195 [INFO][5716] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.195 [INFO][5716] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" iface="eth0" netns="" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.195 [INFO][5716] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.195 [INFO][5716] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.218 [INFO][5725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.218 [INFO][5725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.218 [INFO][5725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.227 [WARNING][5725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.227 [INFO][5725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.228 [INFO][5725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.232137 containerd[1439]: 2025-07-11 00:18:26.230 [INFO][5716] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.232821 containerd[1439]: time="2025-07-11T00:18:26.232634417Z" level=info msg="TearDown network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" successfully" Jul 11 00:18:26.232821 containerd[1439]: time="2025-07-11T00:18:26.232664220Z" level=info msg="StopPodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" returns successfully" Jul 11 00:18:26.234378 containerd[1439]: time="2025-07-11T00:18:26.234332540Z" level=info msg="RemovePodSandbox for \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\"" Jul 11 00:18:26.234378 containerd[1439]: time="2025-07-11T00:18:26.234368743Z" level=info msg="Forcibly stopping sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\"" Jul 11 00:18:26.278925 sshd[5555]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:26.290120 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:43422.service: Deactivated successfully. Jul 11 00:18:26.292563 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 00:18:26.296337 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit. 
Jul 11 00:18:26.301724 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:43436.service - OpenSSH per-connection server daemon (10.0.0.1:43436). Jul 11 00:18:26.303773 systemd-logind[1419]: Removed session 16. Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.275 [WARNING][5743] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" WorkloadEndpoint="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.275 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.275 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" iface="eth0" netns="" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.275 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.275 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.301 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.301 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.301 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.312 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.312 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" HandleID="k8s-pod-network.3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Workload="localhost-k8s-whisker--8685df7cdf--pmpnr-eth0" Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.313 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.320471 containerd[1439]: 2025-07-11 00:18:26.315 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd" Jul 11 00:18:26.320471 containerd[1439]: time="2025-07-11T00:18:26.319625403Z" level=info msg="TearDown network for sandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" successfully" Jul 11 00:18:26.323932 containerd[1439]: time="2025-07-11T00:18:26.323892010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:26.324189 containerd[1439]: time="2025-07-11T00:18:26.324062066Z" level=info msg="RemovePodSandbox \"3ae27e6ec7aa368483617009a32c39f27d73cf2d4ca86676af1d65dddbaab7fd\" returns successfully" Jul 11 00:18:26.324742 containerd[1439]: time="2025-07-11T00:18:26.324716249Z" level=info msg="StopPodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\"" Jul 11 00:18:26.352859 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 43436 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:18:26.356560 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:18:26.362461 systemd-logind[1419]: New session 17 of user core. Jul 11 00:18:26.368554 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.360 [WARNING][5774] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6895268f-5207-4d25-89a2-65b99ac04608", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984", Pod:"calico-apiserver-796d49478c-7n5wn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib03773e7d46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.360 [INFO][5774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.361 [INFO][5774] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" iface="eth0" netns="" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.361 [INFO][5774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.361 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.379 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.379 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.379 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.387 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.387 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.389 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.392397 containerd[1439]: 2025-07-11 00:18:26.390 [INFO][5774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.392803 containerd[1439]: time="2025-07-11T00:18:26.392456116Z" level=info msg="TearDown network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" successfully" Jul 11 00:18:26.392803 containerd[1439]: time="2025-07-11T00:18:26.392482879Z" level=info msg="StopPodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" returns successfully" Jul 11 00:18:26.392992 containerd[1439]: time="2025-07-11T00:18:26.392951124Z" level=info msg="RemovePodSandbox for \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\"" Jul 11 00:18:26.393029 containerd[1439]: time="2025-07-11T00:18:26.392989207Z" level=info msg="Forcibly stopping sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\"" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.427 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0", GenerateName:"calico-apiserver-796d49478c-", Namespace:"calico-apiserver", SelfLink:"", UID:"6895268f-5207-4d25-89a2-65b99ac04608", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"796d49478c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c832201846a023961aa43caa48dae433c70385fe09759d5487467e2d3f29984", Pod:"calico-apiserver-796d49478c-7n5wn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib03773e7d46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.427 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.427 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" iface="eth0" netns="" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.427 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.427 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.446 [INFO][5817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.446 [INFO][5817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.446 [INFO][5817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.456 [WARNING][5817] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.456 [INFO][5817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" HandleID="k8s-pod-network.9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Workload="localhost-k8s-calico--apiserver--796d49478c--7n5wn-eth0" Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.458 [INFO][5817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.464158 containerd[1439]: 2025-07-11 00:18:26.462 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb" Jul 11 00:18:26.464585 containerd[1439]: time="2025-07-11T00:18:26.464196086Z" level=info msg="TearDown network for sandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" successfully" Jul 11 00:18:26.468460 containerd[1439]: time="2025-07-11T00:18:26.468417209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:26.468540 containerd[1439]: time="2025-07-11T00:18:26.468489176Z" level=info msg="RemovePodSandbox \"9f0400f130c9061d902dd0de245af297eced0ba12dc4e8dff80be2ff7a01ccbb\" returns successfully" Jul 11 00:18:26.468942 containerd[1439]: time="2025-07-11T00:18:26.468911456Z" level=info msg="StopPodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\"" Jul 11 00:18:26.521511 sshd[5761]: pam_unix(sshd:session): session closed for user core Jul 11 00:18:26.525528 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:43436.service: Deactivated successfully. Jul 11 00:18:26.527839 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 00:18:26.529365 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit. Jul 11 00:18:26.533076 systemd-logind[1419]: Removed session 17. Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.506 [WARNING][5836] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0", GenerateName:"calico-kube-controllers-7667647d94-", Namespace:"calico-system", SelfLink:"", UID:"9904d86a-2797-43dd-8a39-c9306c873001", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7667647d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832", Pod:"calico-kube-controllers-7667647d94-b4l6t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2027f526111", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.506 [INFO][5836] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.506 [INFO][5836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" iface="eth0" netns="" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.506 [INFO][5836] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.506 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.528 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.528 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.528 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.540 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.540 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.546 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.552037 containerd[1439]: 2025-07-11 00:18:26.549 [INFO][5836] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.552676 containerd[1439]: time="2025-07-11T00:18:26.552068115Z" level=info msg="TearDown network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" successfully" Jul 11 00:18:26.552676 containerd[1439]: time="2025-07-11T00:18:26.552093638Z" level=info msg="StopPodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" returns successfully" Jul 11 00:18:26.552676 containerd[1439]: time="2025-07-11T00:18:26.552520638Z" level=info msg="RemovePodSandbox for \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\"" Jul 11 00:18:26.552676 containerd[1439]: time="2025-07-11T00:18:26.552571443Z" level=info msg="Forcibly stopping sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\"" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.589 [WARNING][5865] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0", GenerateName:"calico-kube-controllers-7667647d94-", Namespace:"calico-system", SelfLink:"", UID:"9904d86a-2797-43dd-8a39-c9306c873001", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7667647d94", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"094a16a4089d195381793672ed3629856088a54c06e2085fc6826f65e86bf832", Pod:"calico-kube-controllers-7667647d94-b4l6t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2027f526111", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.589 [INFO][5865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.589 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" iface="eth0" netns="" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.589 [INFO][5865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.589 [INFO][5865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.608 [INFO][5873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.608 [INFO][5873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.608 [INFO][5873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.621 [WARNING][5873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.621 [INFO][5873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" HandleID="k8s-pod-network.1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Workload="localhost-k8s-calico--kube--controllers--7667647d94--b4l6t-eth0" Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.623 [INFO][5873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.629114 containerd[1439]: 2025-07-11 00:18:26.625 [INFO][5865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116" Jul 11 00:18:26.629114 containerd[1439]: time="2025-07-11T00:18:26.627685575Z" level=info msg="TearDown network for sandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" successfully" Jul 11 00:18:26.630693 containerd[1439]: time="2025-07-11T00:18:26.630645657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:26.630744 containerd[1439]: time="2025-07-11T00:18:26.630719584Z" level=info msg="RemovePodSandbox \"1e4b300ac69375ec90d3c41e8b94f2030623dae5392ff30276bd3b14511e1116\" returns successfully" Jul 11 00:18:26.631308 containerd[1439]: time="2025-07-11T00:18:26.631272797Z" level=info msg="StopPodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\"" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.670 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ebfb1a06-1477-48f5-805f-9808c5339795", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a", Pod:"goldmane-768f4c5c69-8rxq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali307122bfa61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.670 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.670 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" iface="eth0" netns="" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.670 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.670 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.689 [INFO][5899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.689 [INFO][5899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.689 [INFO][5899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.701 [WARNING][5899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.701 [INFO][5899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.702 [INFO][5899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.705740 containerd[1439]: 2025-07-11 00:18:26.704 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.706295 containerd[1439]: time="2025-07-11T00:18:26.705759149Z" level=info msg="TearDown network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" successfully" Jul 11 00:18:26.706295 containerd[1439]: time="2025-07-11T00:18:26.705784951Z" level=info msg="StopPodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" returns successfully" Jul 11 00:18:26.706480 containerd[1439]: time="2025-07-11T00:18:26.706391969Z" level=info msg="RemovePodSandbox for \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\"" Jul 11 00:18:26.706480 containerd[1439]: time="2025-07-11T00:18:26.706445574Z" level=info msg="Forcibly stopping sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\"" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.745 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"ebfb1a06-1477-48f5-805f-9808c5339795", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e151982b6fc110d5b208e29e7ca8ab26794d3ece8140c205b25d2ff233cfe4a", Pod:"goldmane-768f4c5c69-8rxq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali307122bfa61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.745 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.745 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" iface="eth0" netns="" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.746 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.746 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.767 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.767 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.767 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.776 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.776 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" HandleID="k8s-pod-network.197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Workload="localhost-k8s-goldmane--768f4c5c69--8rxq7-eth0" Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.778 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:18:26.782689 containerd[1439]: 2025-07-11 00:18:26.781 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547" Jul 11 00:18:26.783101 containerd[1439]: time="2025-07-11T00:18:26.782738978Z" level=info msg="TearDown network for sandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" successfully" Jul 11 00:18:26.785524 containerd[1439]: time="2025-07-11T00:18:26.785490281Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:18:26.785636 containerd[1439]: time="2025-07-11T00:18:26.785558608Z" level=info msg="RemovePodSandbox \"197ad8189306396c2c29156b726a52146e04e9b8dd4c87a694d88f9478ed0547\" returns successfully" Jul 11 00:18:26.786065 containerd[1439]: time="2025-07-11T00:18:26.786028172Z" level=info msg="StopPodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\"" Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.827 [WARNING][5943] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1f2b193-85e7-4131-903d-0d058505c956", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0", Pod:"coredns-668d6bf9bc-tbbr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb8501a0ef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.827 [INFO][5943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.827 [INFO][5943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" iface="eth0" netns="" Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.827 [INFO][5943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.827 [INFO][5943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.850 [INFO][5952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0" Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.850 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.850 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.859 [WARNING][5952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.859 [INFO][5952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.861 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:26.864396 containerd[1439]: 2025-07-11 00:18:26.862 [INFO][5943] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:26.864900 containerd[1439]: time="2025-07-11T00:18:26.864507825Z" level=info msg="TearDown network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" successfully"
Jul 11 00:18:26.864900 containerd[1439]: time="2025-07-11T00:18:26.864535788Z" level=info msg="StopPodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" returns successfully"
Jul 11 00:18:26.865042 containerd[1439]: time="2025-07-11T00:18:26.865018954Z" level=info msg="RemovePodSandbox for \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\""
Jul 11 00:18:26.865077 containerd[1439]: time="2025-07-11T00:18:26.865049637Z" level=info msg="Forcibly stopping sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\""
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.900 [WARNING][5970] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e1f2b193-85e7-4131-903d-0d058505c956", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 17, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"828688e4b97fb3d5ebf98841952064e6910349fa6b4fd84ddd64ef4ce741cbd0", Pod:"coredns-668d6bf9bc-tbbr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb8501a0ef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.900 [INFO][5970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.900 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" iface="eth0" netns=""
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.900 [INFO][5970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.900 [INFO][5970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.920 [INFO][5979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.920 [INFO][5979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.920 [INFO][5979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.928 [WARNING][5979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.928 [INFO][5979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" HandleID="k8s-pod-network.94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7" Workload="localhost-k8s-coredns--668d6bf9bc--tbbr8-eth0"
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.929 [INFO][5979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:18:26.933121 containerd[1439]: 2025-07-11 00:18:26.931 [INFO][5970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7"
Jul 11 00:18:26.933121 containerd[1439]: time="2025-07-11T00:18:26.933088373Z" level=info msg="TearDown network for sandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" successfully"
Jul 11 00:18:26.939026 containerd[1439]: time="2025-07-11T00:18:26.938974895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 11 00:18:26.939150 containerd[1439]: time="2025-07-11T00:18:26.939068824Z" level=info msg="RemovePodSandbox \"94c269f1c36ce2251374ea77cd5e4efbbf44d2dc7eb4f1277e693ef0769388d7\" returns successfully"
Jul 11 00:18:29.273216 kubelet[2470]: I0711 00:18:29.273175    2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:18:31.533628 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:43444.service - OpenSSH per-connection server daemon (10.0.0.1:43444).
Jul 11 00:18:31.580282 sshd[5994]: Accepted publickey for core from 10.0.0.1 port 43444 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:18:31.581811 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:31.586488 systemd-logind[1419]: New session 18 of user core.
Jul 11 00:18:31.608684 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 00:18:31.735819 sshd[5994]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:31.742075 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:43444.service: Deactivated successfully.
Jul 11 00:18:31.744046 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:18:31.745692 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:18:31.746536 systemd-logind[1419]: Removed session 18.
Jul 11 00:18:36.746412 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:46284.service - OpenSSH per-connection server daemon (10.0.0.1:46284).
Jul 11 00:18:36.790398 sshd[6011]: Accepted publickey for core from 10.0.0.1 port 46284 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:18:36.791827 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:36.795735 systemd-logind[1419]: New session 19 of user core.
Jul 11 00:18:36.807627 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 00:18:36.992695 sshd[6011]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:36.996618 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:46284.service: Deactivated successfully.
Jul 11 00:18:36.998768 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:18:36.999415 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:18:37.000228 systemd-logind[1419]: Removed session 19.
Jul 11 00:18:40.662937 kubelet[2470]: I0711 00:18:40.662882    2470 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 11 00:18:42.008300 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:46298.service - OpenSSH per-connection server daemon (10.0.0.1:46298).
Jul 11 00:18:42.054189 sshd[6075]: Accepted publickey for core from 10.0.0.1 port 46298 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:18:42.055768 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:18:42.061282 systemd-logind[1419]: New session 20 of user core.
Jul 11 00:18:42.074855 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 00:18:42.337880 sshd[6075]: pam_unix(sshd:session): session closed for user core
Jul 11 00:18:42.341092 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:46298.service: Deactivated successfully.
Jul 11 00:18:42.342731 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:18:42.344166 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:18:42.345304 systemd-logind[1419]: Removed session 20.