Jul 7 06:03:47.895203 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 06:03:47.895223 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 06:03:47.895239 kernel: KASLR enabled
Jul 7 06:03:47.895245 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:03:47.895251 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 7 06:03:47.895257 kernel: random: crng init done
Jul 7 06:03:47.895264 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:03:47.895270 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 7 06:03:47.895276 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:03:47.895284 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895290 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895296 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895302 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895308 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895316 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895324 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895330 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895337 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:03:47.895343 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 7 06:03:47.895349 kernel: NUMA: Failed to initialise from firmware
Jul 7 06:03:47.895356 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:03:47.895362 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 7 06:03:47.895368 kernel: Zone ranges:
Jul 7 06:03:47.895375 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:03:47.895381 kernel: DMA32 empty
Jul 7 06:03:47.895389 kernel: Normal empty
Jul 7 06:03:47.895395 kernel: Movable zone start for each node
Jul 7 06:03:47.895402 kernel: Early memory node ranges
Jul 7 06:03:47.895408 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 7 06:03:47.895414 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 7 06:03:47.895421 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 7 06:03:47.895427 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 7 06:03:47.895433 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 7 06:03:47.895440 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 7 06:03:47.895446 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 7 06:03:47.895452 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:03:47.895459 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 7 06:03:47.895466 kernel: psci: probing for conduit method from ACPI.
Jul 7 06:03:47.895473 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 06:03:47.895479 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 06:03:47.895488 kernel: psci: Trusted OS migration not required
Jul 7 06:03:47.895495 kernel: psci: SMC Calling Convention v1.1
Jul 7 06:03:47.895502 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 06:03:47.895510 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 06:03:47.895517 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 06:03:47.895524 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 7 06:03:47.895530 kernel: Detected PIPT I-cache on CPU0
Jul 7 06:03:47.895537 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 06:03:47.895544 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 06:03:47.895551 kernel: CPU features: detected: Spectre-v4
Jul 7 06:03:47.895557 kernel: CPU features: detected: Spectre-BHB
Jul 7 06:03:47.895564 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 06:03:47.895571 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 06:03:47.895579 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 06:03:47.895586 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 06:03:47.895593 kernel: alternatives: applying boot alternatives
Jul 7 06:03:47.895600 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:03:47.895608 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:03:47.895615 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:03:47.895621 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:03:47.895628 kernel: Fallback order for Node 0: 0
Jul 7 06:03:47.895635 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 7 06:03:47.895642 kernel: Policy zone: DMA
Jul 7 06:03:47.895648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:03:47.895656 kernel: software IO TLB: area num 4.
Jul 7 06:03:47.895663 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 7 06:03:47.895670 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 7 06:03:47.895677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:03:47.895684 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:03:47.895691 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:03:47.895698 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:03:47.895705 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:03:47.895712 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:03:47.895719 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:03:47.895726 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:03:47.895732 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 06:03:47.895740 kernel: GICv3: 256 SPIs implemented
Jul 7 06:03:47.895747 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 06:03:47.895754 kernel: Root IRQ handler: gic_handle_irq
Jul 7 06:03:47.895760 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 06:03:47.895767 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 06:03:47.895774 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 06:03:47.895781 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 06:03:47.895788 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 06:03:47.895795 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 7 06:03:47.895802 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 7 06:03:47.895809 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:03:47.895817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:03:47.895824 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 06:03:47.895831 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 06:03:47.895838 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 06:03:47.895845 kernel: arm-pv: using stolen time PV
Jul 7 06:03:47.895852 kernel: Console: colour dummy device 80x25
Jul 7 06:03:47.895858 kernel: ACPI: Core revision 20230628
Jul 7 06:03:47.895866 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 06:03:47.895873 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:03:47.895880 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 06:03:47.895897 kernel: landlock: Up and running.
Jul 7 06:03:47.895904 kernel: SELinux: Initializing.
Jul 7 06:03:47.895911 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:03:47.895918 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:03:47.895925 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:03:47.895932 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:03:47.895939 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:03:47.895946 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:03:47.895953 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 06:03:47.895961 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 06:03:47.895968 kernel: Remapping and enabling EFI services.
Jul 7 06:03:47.895975 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:03:47.895982 kernel: Detected PIPT I-cache on CPU1
Jul 7 06:03:47.895989 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 06:03:47.895996 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 7 06:03:47.896003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:03:47.896010 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 06:03:47.896017 kernel: Detected PIPT I-cache on CPU2
Jul 7 06:03:47.896024 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 7 06:03:47.896033 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 7 06:03:47.896040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:03:47.896051 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 7 06:03:47.896060 kernel: Detected PIPT I-cache on CPU3
Jul 7 06:03:47.896067 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 7 06:03:47.896074 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 7 06:03:47.896082 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:03:47.896089 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 7 06:03:47.896096 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:03:47.896105 kernel: SMP: Total of 4 processors activated.
Jul 7 06:03:47.896112 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 06:03:47.896119 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 06:03:47.896127 kernel: CPU features: detected: Common not Private translations
Jul 7 06:03:47.896134 kernel: CPU features: detected: CRC32 instructions
Jul 7 06:03:47.896141 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 06:03:47.896149 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 06:03:47.896156 kernel: CPU features: detected: LSE atomic instructions
Jul 7 06:03:47.896170 kernel: CPU features: detected: Privileged Access Never
Jul 7 06:03:47.896177 kernel: CPU features: detected: RAS Extension Support
Jul 7 06:03:47.896185 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 06:03:47.896192 kernel: CPU: All CPU(s) started at EL1
Jul 7 06:03:47.896199 kernel: alternatives: applying system-wide alternatives
Jul 7 06:03:47.896206 kernel: devtmpfs: initialized
Jul 7 06:03:47.896214 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:03:47.896221 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:03:47.896231 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:03:47.896241 kernel: SMBIOS 3.0.0 present.
Jul 7 06:03:47.896248 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 7 06:03:47.896256 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:03:47.896263 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 06:03:47.896270 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 06:03:47.896278 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 06:03:47.896285 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:03:47.896292 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 7 06:03:47.896300 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:03:47.896309 kernel: cpuidle: using governor menu
Jul 7 06:03:47.896316 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 06:03:47.896323 kernel: ASID allocator initialised with 32768 entries
Jul 7 06:03:47.896330 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:03:47.896338 kernel: Serial: AMBA PL011 UART driver
Jul 7 06:03:47.896345 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 06:03:47.896352 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 06:03:47.896360 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 06:03:47.896367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:03:47.896376 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:03:47.896383 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 06:03:47.896390 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 06:03:47.896397 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:03:47.896405 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:03:47.896412 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 06:03:47.896419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 06:03:47.896426 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:03:47.896434 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:03:47.896442 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:03:47.896449 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:03:47.896457 kernel: ACPI: Interpreter enabled
Jul 7 06:03:47.896464 kernel: ACPI: Using GIC for interrupt routing
Jul 7 06:03:47.896471 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 06:03:47.896478 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 06:03:47.896486 kernel: printk: console [ttyAMA0] enabled
Jul 7 06:03:47.896493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:03:47.896627 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:03:47.896702 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 06:03:47.896766 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 06:03:47.896830 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 06:03:47.896961 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 06:03:47.896974 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 06:03:47.896982 kernel: PCI host bridge to bus 0000:00
Jul 7 06:03:47.897059 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 06:03:47.897123 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 06:03:47.897189 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 06:03:47.897252 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:03:47.897330 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 06:03:47.897405 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 06:03:47.897474 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 7 06:03:47.897545 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 7 06:03:47.897613 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:03:47.897682 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:03:47.897748 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 7 06:03:47.897814 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 7 06:03:47.897873 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 06:03:47.897942 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 06:03:47.898005 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 06:03:47.898015 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 06:03:47.898023 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 06:03:47.898030 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 06:03:47.898038 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 06:03:47.898045 kernel: iommu: Default domain type: Translated
Jul 7 06:03:47.898052 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 06:03:47.898060 kernel: efivars: Registered efivars operations
Jul 7 06:03:47.898067 kernel: vgaarb: loaded
Jul 7 06:03:47.898076 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 06:03:47.898083 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:03:47.898091 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:03:47.898098 kernel: pnp: PnP ACPI init
Jul 7 06:03:47.898180 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 06:03:47.898192 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 06:03:47.898199 kernel: NET: Registered PF_INET protocol family
Jul 7 06:03:47.898207 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:03:47.898216 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:03:47.898224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:03:47.898232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:03:47.898239 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:03:47.898247 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:03:47.898254 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:03:47.898261 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:03:47.898269 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:03:47.898276 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:03:47.898285 kernel: kvm [1]: HYP mode not available
Jul 7 06:03:47.898292 kernel: Initialise system trusted keyrings
Jul 7 06:03:47.898300 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:03:47.898307 kernel: Key type asymmetric registered
Jul 7 06:03:47.898314 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:03:47.898322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:03:47.898329 kernel: io scheduler mq-deadline registered
Jul 7 06:03:47.898337 kernel: io scheduler kyber registered
Jul 7 06:03:47.898344 kernel: io scheduler bfq registered
Jul 7 06:03:47.898354 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 06:03:47.898361 kernel: ACPI: button: Power Button [PWRB]
Jul 7 06:03:47.898369 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 06:03:47.898452 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 7 06:03:47.898466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:03:47.898475 kernel: thunder_xcv, ver 1.0
Jul 7 06:03:47.898484 kernel: thunder_bgx, ver 1.0
Jul 7 06:03:47.898493 kernel: nicpf, ver 1.0
Jul 7 06:03:47.898502 kernel: nicvf, ver 1.0
Jul 7 06:03:47.898584 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 06:03:47.898647 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:03:47 UTC (1751868227)
Jul 7 06:03:47.898657 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 06:03:47.898665 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 06:03:47.898673 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 06:03:47.898680 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 06:03:47.898688 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:03:47.898695 kernel: Segment Routing with IPv6
Jul 7 06:03:47.898704 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:03:47.898712 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:03:47.898719 kernel: Key type dns_resolver registered
Jul 7 06:03:47.898726 kernel: registered taskstats version 1
Jul 7 06:03:47.898734 kernel: Loading compiled-in X.509 certificates
Jul 7 06:03:47.898742 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 06:03:47.898749 kernel: Key type .fscrypt registered
Jul 7 06:03:47.898756 kernel: Key type fscrypt-provisioning registered
Jul 7 06:03:47.898764 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:03:47.898773 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:03:47.898780 kernel: ima: No architecture policies found
Jul 7 06:03:47.898787 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 06:03:47.898795 kernel: clk: Disabling unused clocks
Jul 7 06:03:47.898802 kernel: Freeing unused kernel memory: 39424K
Jul 7 06:03:47.898809 kernel: Run /init as init process
Jul 7 06:03:47.898816 kernel: with arguments:
Jul 7 06:03:47.898824 kernel: /init
Jul 7 06:03:47.898831 kernel: with environment:
Jul 7 06:03:47.898839 kernel: HOME=/
Jul 7 06:03:47.898846 kernel: TERM=linux
Jul 7 06:03:47.898854 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:03:47.898863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:03:47.898872 systemd[1]: Detected virtualization kvm.
Jul 7 06:03:47.898880 systemd[1]: Detected architecture arm64.
Jul 7 06:03:47.898898 systemd[1]: Running in initrd.
Jul 7 06:03:47.898906 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:03:47.898915 systemd[1]: Hostname set to <localhost>.
Jul 7 06:03:47.898923 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:03:47.898931 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:03:47.898939 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:03:47.898947 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:03:47.898955 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:03:47.898963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:03:47.898972 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:03:47.898980 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:03:47.898990 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:03:47.898998 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:03:47.899006 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:03:47.899014 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:03:47.899022 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:03:47.899031 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:03:47.899039 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:03:47.899047 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:03:47.899055 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:03:47.899062 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:03:47.899070 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:03:47.899078 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 06:03:47.899087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:03:47.899094 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:03:47.899104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:03:47.899112 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:03:47.899120 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:03:47.899128 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:03:47.899135 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:03:47.899143 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:03:47.899151 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:03:47.899159 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:03:47.899174 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:03:47.899182 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:03:47.899190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:03:47.899197 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:03:47.899206 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:03:47.899238 systemd-journald[237]: Collecting audit messages is disabled.
Jul 7 06:03:47.899258 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:03:47.899266 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:47.899274 systemd-journald[237]: Journal started
Jul 7 06:03:47.899295 systemd-journald[237]: Runtime Journal (/run/log/journal/e2a883e2298549fb8643e1502585dec8) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:03:47.890483 systemd-modules-load[239]: Inserted module 'overlay'
Jul 7 06:03:47.902921 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:03:47.902948 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:03:47.905908 kernel: Bridge firewalling registered
Jul 7 06:03:47.905943 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 7 06:03:47.925065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:03:47.926815 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:03:47.928924 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:03:47.930601 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:03:47.934541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:03:47.941505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:03:47.943607 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:03:47.946323 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:03:47.948492 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:03:47.960113 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:03:47.962410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:03:47.970125 dracut-cmdline[278]: dracut-dracut-053
Jul 7 06:03:47.972611 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:03:47.994296 systemd-resolved[281]: Positive Trust Anchors:
Jul 7 06:03:47.994311 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:03:47.994342 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:03:47.998975 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jul 7 06:03:48.003138 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:03:48.004281 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:03:48.037922 kernel: SCSI subsystem initialized
Jul 7 06:03:48.042905 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:03:48.050920 kernel: iscsi: registered transport (tcp)
Jul 7 06:03:48.063912 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:03:48.063931 kernel: QLogic iSCSI HBA Driver
Jul 7 06:03:48.106698 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:03:48.117019 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:03:48.134924 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:03:48.134981 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:03:48.135011 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 06:03:48.182936 kernel: raid6: neonx8 gen() 15688 MB/s
Jul 7 06:03:48.199919 kernel: raid6: neonx4 gen() 15583 MB/s
Jul 7 06:03:48.216916 kernel: raid6: neonx2 gen() 13207 MB/s
Jul 7 06:03:48.233941 kernel: raid6: neonx1 gen() 10456 MB/s
Jul 7 06:03:48.250944 kernel: raid6: int64x8 gen() 6925 MB/s
Jul 7 06:03:48.267942 kernel: raid6: int64x4 gen() 7315 MB/s
Jul 7 06:03:48.284929 kernel: raid6: int64x2 gen() 6104 MB/s
Jul 7 06:03:48.302027 kernel: raid6: int64x1 gen() 5033 MB/s
Jul 7 06:03:48.302063 kernel: raid6: using algorithm neonx8 gen() 15688 MB/s
Jul 7 06:03:48.320040 kernel: raid6: .... xor() 11901 MB/s, rmw enabled
Jul 7 06:03:48.320069 kernel: raid6: using neon recovery algorithm
Jul 7 06:03:48.326471 kernel: xor: measuring software checksum speed
Jul 7 06:03:48.326505 kernel: 8regs : 17932 MB/sec
Jul 7 06:03:48.326515 kernel: 32regs : 19299 MB/sec
Jul 7 06:03:48.327126 kernel: arm64_neon : 26954 MB/sec
Jul 7 06:03:48.327151 kernel: xor: using function: arm64_neon (26954 MB/sec)
Jul 7 06:03:48.379946 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:03:48.390828 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:03:48.402030 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:03:48.413598 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jul 7 06:03:48.416682 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:03:48.431066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:03:48.442142 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jul 7 06:03:48.467208 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:03:48.476095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:03:48.515060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:03:48.522040 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:03:48.535727 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:03:48.538019 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:03:48.540245 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:03:48.542788 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:03:48.551046 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:03:48.554014 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 7 06:03:48.556864 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:03:48.561325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:03:48.565338 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:03:48.565358 kernel: GPT:9289727 != 19775487
Jul 7 06:03:48.565368 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:03:48.565378 kernel: GPT:9289727 != 19775487
Jul 7 06:03:48.565386 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:03:48.565396 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:48.568932 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:03:48.569042 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:03:48.571153 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:03:48.572734 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:03:48.572874 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:48.577273 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:03:48.584923 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (521)
Jul 7 06:03:48.587046 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (525)
Jul 7 06:03:48.589137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:03:48.599915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:03:48.604987 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:03:48.613531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:03:48.618155 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:03:48.622004 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:03:48.623186 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:03:48.642088 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:03:48.643861 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:03:48.649654 disk-uuid[553]: Primary Header is updated.
Jul 7 06:03:48.649654 disk-uuid[553]: Secondary Entries is updated.
Jul 7 06:03:48.649654 disk-uuid[553]: Secondary Header is updated.
Jul 7 06:03:48.653917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:48.665797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:03:49.664051 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:03:49.664101 disk-uuid[554]: The operation has completed successfully.
Jul 7 06:03:49.685509 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:03:49.685607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:03:49.709094 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:03:49.712315 sh[574]: Success
Jul 7 06:03:49.725911 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 06:03:49.754466 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:03:49.767367 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:03:49.770389 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:03:49.780016 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 06:03:49.780066 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:03:49.780077 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 06:03:49.782445 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 06:03:49.782464 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 06:03:49.791083 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:03:49.792380 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:03:49.808102 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:03:49.809669 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:03:49.817059 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:03:49.817096 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:03:49.817106 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:03:49.820014 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:03:49.826165 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 06:03:49.828007 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:03:49.834036 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:03:49.841058 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:03:49.900724 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:03:49.910182 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:03:49.934834 systemd-networkd[766]: lo: Link UP
Jul 7 06:03:49.934847 systemd-networkd[766]: lo: Gained carrier
Jul 7 06:03:49.935548 systemd-networkd[766]: Enumeration completed
Jul 7 06:03:49.935631 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:03:49.936757 systemd[1]: Reached target network.target - Network.
Jul 7 06:03:49.937857 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:03:49.937860 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:03:49.938711 systemd-networkd[766]: eth0: Link UP
Jul 7 06:03:49.938714 systemd-networkd[766]: eth0: Gained carrier
Jul 7 06:03:49.938721 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:03:49.954915 ignition[667]: Ignition 2.19.0
Jul 7 06:03:49.954924 ignition[667]: Stage: fetch-offline
Jul 7 06:03:49.954942 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:03:49.954961 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:49.954969 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:49.955117 ignition[667]: parsed url from cmdline: ""
Jul 7 06:03:49.955120 ignition[667]: no config URL provided
Jul 7 06:03:49.955124 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:03:49.955131 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:03:49.955161 ignition[667]: op(1): [started] loading QEMU firmware config module
Jul 7 06:03:49.955167 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:03:49.963929 ignition[667]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:03:49.963954 ignition[667]: QEMU firmware config was not found. Ignoring...
Jul 7 06:03:50.002678 ignition[667]: parsing config with SHA512: 8ad81a81771e4c4c6649f3a04652b614cf3181b3dc775cb8c4dacdf4c51d70bf48121130c5623119b75466508dfb577ec3722c3c951de0bb9af6a1c390b4c961
Jul 7 06:03:50.008720 unknown[667]: fetched base config from "system"
Jul 7 06:03:50.008739 unknown[667]: fetched user config from "qemu"
Jul 7 06:03:50.009241 ignition[667]: fetch-offline: fetch-offline passed
Jul 7 06:03:50.010922 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:03:50.009310 ignition[667]: Ignition finished successfully
Jul 7 06:03:50.012422 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:03:50.019049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:03:50.029083 ignition[772]: Ignition 2.19.0
Jul 7 06:03:50.029092 ignition[772]: Stage: kargs
Jul 7 06:03:50.029258 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:50.029268 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:50.032319 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:03:50.030169 ignition[772]: kargs: kargs passed
Jul 7 06:03:50.030215 ignition[772]: Ignition finished successfully
Jul 7 06:03:50.034647 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:03:50.047468 ignition[780]: Ignition 2.19.0
Jul 7 06:03:50.047484 ignition[780]: Stage: disks
Jul 7 06:03:50.047641 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:50.047651 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:50.048522 ignition[780]: disks: disks passed
Jul 7 06:03:50.050215 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:03:50.048567 ignition[780]: Ignition finished successfully
Jul 7 06:03:50.051768 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:03:50.053291 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:03:50.055367 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:03:50.057180 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:03:50.059096 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:03:50.076111 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:03:50.085686 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 06:03:50.089299 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:03:50.098040 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:03:50.153921 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 06:03:50.154399 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:03:50.155647 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:03:50.172989 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:03:50.175298 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:03:50.176282 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:03:50.176321 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:03:50.176342 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:03:50.183519 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:03:50.187017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:03:50.189796 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Jul 7 06:03:50.192771 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:03:50.192808 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:03:50.192825 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:03:50.197918 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:03:50.199112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:03:50.238396 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:03:50.243221 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:03:50.246333 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:03:50.249475 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:03:50.320806 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:03:50.341077 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:03:50.343515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:03:50.348920 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:03:50.368533 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:03:50.370500 ignition[912]: INFO : Ignition 2.19.0
Jul 7 06:03:50.370500 ignition[912]: INFO : Stage: mount
Jul 7 06:03:50.370500 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:50.370500 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:50.373963 ignition[912]: INFO : mount: mount passed
Jul 7 06:03:50.373963 ignition[912]: INFO : Ignition finished successfully
Jul 7 06:03:50.372936 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:03:50.385019 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:03:50.778675 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:03:50.788430 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:03:50.795923 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Jul 7 06:03:50.795959 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:03:50.798062 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:03:50.798905 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:03:50.800904 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:03:50.802184 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:03:50.818980 ignition[942]: INFO : Ignition 2.19.0
Jul 7 06:03:50.818980 ignition[942]: INFO : Stage: files
Jul 7 06:03:50.820706 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:03:50.820706 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:03:50.820706 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:03:50.824218 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:03:50.824218 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:03:50.827054 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:03:50.827054 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:03:50.827054 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:03:50.826626 unknown[942]: wrote ssh authorized keys file for user: core
Jul 7 06:03:50.832352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 06:03:50.832352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 7 06:03:50.832352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 06:03:50.832352 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 06:03:50.882470 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 06:03:51.026013 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:03:51.028011 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:03:51.041791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:03:51.041791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:03:51.041791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:03:51.041791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:03:51.041791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:03:51.041791 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 06:03:51.472372 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 06:03:51.712509 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:03:51.712509 ignition[942]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 7 06:03:51.716094 ignition[942]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:03:51.737444 ignition[942]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:03:51.740741 ignition[942]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:03:51.744976 ignition[942]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:03:51.744976 ignition[942]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:03:51.744976 ignition[942]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:03:51.744976 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:03:51.744976 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:03:51.744976 ignition[942]: INFO : files: files passed
Jul 7 06:03:51.744976 ignition[942]: INFO : Ignition finished successfully
Jul 7 06:03:51.743704 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:03:51.752064 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:03:51.754766 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:03:51.756109 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:03:51.757800 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:03:51.762217 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:03:51.763940 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:03:51.763940 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:03:51.766926 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:03:51.766365 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:03:51.768523 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:03:51.783042 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:03:51.802751 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:03:51.802860 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:03:51.805036 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:03:51.806897 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:03:51.808795 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:03:51.809572 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:03:51.824821 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:03:51.834042 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:03:51.841464 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:03:51.842765 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:03:51.844795 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:03:51.846552 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:03:51.846673 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:03:51.849074 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:03:51.851003 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:03:51.852622 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:03:51.854276 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:03:51.856118 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:03:51.858010 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:03:51.859837 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:03:51.861821 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:03:51.863812 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:03:51.865839 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:03:51.867397 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:03:51.867522 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:03:51.869737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:03:51.870872 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:03:51.872797 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:03:51.873946 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:03:51.875910 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:03:51.876025 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:03:51.879109 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:03:51.879235 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:03:51.881107 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:03:51.882673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:03:51.882778 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:03:51.884686 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:03:51.886464 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:03:51.887958 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:03:51.888092 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:03:51.889721 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jul 7 06:03:51.889806 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:03:51.891898 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:03:51.892008 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:03:51.893698 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:03:51.893797 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 06:03:51.905044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:03:51.906573 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:03:51.907591 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:03:51.907709 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:03:51.909632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:03:51.909732 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:03:51.914705 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:03:51.914799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:03:51.919811 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:03:51.922286 ignition[997]: INFO : Ignition 2.19.0 Jul 7 06:03:51.922286 ignition[997]: INFO : Stage: umount Jul 7 06:03:51.922286 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:03:51.922286 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:03:51.922286 ignition[997]: INFO : umount: umount passed Jul 7 06:03:51.922286 ignition[997]: INFO : Ignition finished successfully Jul 7 06:03:51.924796 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:03:51.924908 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:03:51.928082 systemd[1]: Stopped target network.target - Network. Jul 7 06:03:51.929690 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:03:51.929750 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:03:51.931576 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:03:51.931619 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:03:51.933620 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:03:51.933667 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:03:51.935234 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:03:51.935276 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:03:51.937108 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:03:51.938827 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:03:51.940745 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:03:51.940836 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:03:51.942613 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:03:51.942696 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:03:51.944929 systemd-networkd[766]: eth0: DHCPv6 lease lost Jul 7 06:03:51.945096 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:03:51.945204 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:03:51.947658 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jul 7 06:03:51.947762 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:03:51.952002 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:03:51.952051 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:03:51.961990 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 06:03:51.962845 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:03:51.962919 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:03:51.964980 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:03:51.965021 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:03:51.966895 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:03:51.966946 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:03:51.968745 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:03:51.968789 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:03:51.970819 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:03:51.982273 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:03:51.983923 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 06:03:51.991519 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:03:51.991645 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:03:51.993810 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:03:51.993845 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:03:51.995596 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:03:51.995625 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:03:51.997448 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:03:51.997490 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:03:52.000039 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:03:52.000080 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:03:52.002654 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:03:52.002698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:03:52.013029 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:03:52.014042 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:03:52.014096 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:03:52.016119 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 06:03:52.016172 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:03:52.018075 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:03:52.018115 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:03:52.020172 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:03:52.020214 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 06:03:52.022334 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:03:52.022424 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:03:52.024697 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:03:52.026749 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:03:52.036110 systemd[1]: Switching root. Jul 7 06:03:52.066757 systemd-journald[237]: Journal stopped Jul 7 06:03:52.790663 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 7 06:03:52.790725 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:03:52.790739 kernel: SELinux: policy capability open_perms=1 Jul 7 06:03:52.790748 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:03:52.790765 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:03:52.790775 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:03:52.790785 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:03:52.790795 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:03:52.790806 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:03:52.790817 kernel: audit: type=1403 audit(1751868232.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:03:52.790827 systemd[1]: Successfully loaded SELinux policy in 32.508ms. Jul 7 06:03:52.790844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.680ms. Jul 7 06:03:52.790857 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 06:03:52.790868 systemd[1]: Detected virtualization kvm. Jul 7 06:03:52.790879 systemd[1]: Detected architecture arm64. Jul 7 06:03:52.790908 systemd[1]: Detected first boot. Jul 7 06:03:52.790920 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:03:52.790931 zram_generator::config[1063]: No configuration found. Jul 7 06:03:52.790942 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:03:52.790953 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:03:52.790963 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 06:03:52.790977 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 06:03:52.790988 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:03:52.790999 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:03:52.791009 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:03:52.791020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:03:52.791031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:03:52.791041 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:03:52.791052 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:03:52.791065 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:03:52.791077 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
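At the switch-root boundary above, the initrd journal is stopped by PID 1 and the real root's systemd relabels /dev, /run and friends after loading the SELinux policy. The same handoff can be retraced later from the persistent journal; a minimal sketch (the SELinux userspace tools are an assumption, Flatcar may not ship them):

  $ journalctl -b -o short-monotonic | grep -E 'Switching root|Journal stopped|SELinux'
  $ getenforce    # prints the current SELinux mode, if the tool is installed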
Jul 7 06:03:52.791087 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:03:52.791098 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:03:52.791108 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:03:52.791120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:03:52.791130 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 06:03:52.791141 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:03:52.791159 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:03:52.791173 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:03:52.791184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:03:52.791195 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:03:52.791205 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:03:52.791216 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:03:52.791230 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:03:52.791241 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 06:03:52.791252 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 06:03:52.791264 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:03:52.791274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:03:52.791285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:03:52.791295 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:03:52.791307 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:03:52.791318 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:03:52.791328 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 06:03:52.791339 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:03:52.791350 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:03:52.791362 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:03:52.791373 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:03:52.791383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:03:52.791394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:03:52.791404 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:03:52.791415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:03:52.791425 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:03:52.791436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:03:52.791446 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:03:52.791458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
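The "Listening on ..." and "Set up automount ..." entries above are systemd pre-opening sockets and automount points before the services behind them exist; socket activation then starts each service on first use. Both views are one command away:

  $ systemctl list-sockets                         # socket -> activated service mapping
  $ systemctl list-units --type=socket,automount   # the units named in the log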
Jul 7 06:03:52.791469 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:03:52.791479 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 7 06:03:52.791490 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 7 06:03:52.791501 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:03:52.791511 kernel: fuse: init (API version 7.39) Jul 7 06:03:52.791520 kernel: loop: module loaded Jul 7 06:03:52.791531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:03:52.791542 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:03:52.791554 kernel: ACPI: bus type drm_connector registered Jul 7 06:03:52.791564 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:03:52.791574 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:03:52.791603 systemd-journald[1148]: Collecting audit messages is disabled. Jul 7 06:03:52.791624 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:03:52.791635 systemd-journald[1148]: Journal started Jul 7 06:03:52.791658 systemd-journald[1148]: Runtime Journal (/run/log/journal/e2a883e2298549fb8643e1502585dec8) is 5.9M, max 47.3M, 41.4M free. Jul 7 06:03:52.793913 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:03:52.794830 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:03:52.796052 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:03:52.797143 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:03:52.798307 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:03:52.799538 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:03:52.800795 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:03:52.802268 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:03:52.803745 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:03:52.803935 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:03:52.805322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:03:52.805483 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:03:52.806866 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:03:52.807045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:03:52.808405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:03:52.808563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:03:52.810037 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:03:52.810209 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:03:52.811773 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:03:52.811991 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:03:52.813535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
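Each modprobe@<module>.service above (configfs, dm_mod, drm, efi_pstore, fuse, loop) is an instance of a single template unit, with the text after "@" handed to modprobe; the matching "fuse: init" and "loop: module loaded" kernel lines confirm the loads. A small illustration:

  $ systemctl cat modprobe@.service         # the shared template behind all six instances
  $ lsmod | grep -E '^(fuse|loop|dm_mod)'   # modules reported loaded above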
Jul 7 06:03:52.814980 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:03:52.816680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:03:52.828401 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:03:52.842027 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:03:52.844139 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:03:52.845289 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:03:52.846809 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:03:52.848828 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:03:52.850078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:03:52.854062 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:03:52.855360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:03:52.857983 systemd-journald[1148]: Time spent on flushing to /var/log/journal/e2a883e2298549fb8643e1502585dec8 is 17.363ms for 842 entries. Jul 7 06:03:52.857983 systemd-journald[1148]: System Journal (/var/log/journal/e2a883e2298549fb8643e1502585dec8) is 8.0M, max 195.6M, 187.6M free. Jul 7 06:03:52.888766 systemd-journald[1148]: Received client request to flush runtime journal. Jul 7 06:03:52.859063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:03:52.861315 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 06:03:52.864032 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:03:52.867991 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:03:52.869376 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:03:52.870847 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:03:52.874470 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:03:52.884208 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 06:03:52.893062 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jul 7 06:03:52.893409 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jul 7 06:03:52.894258 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:03:52.895946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:03:52.897882 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 06:03:52.909189 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:03:52.910643 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 06:03:52.933425 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
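systemd-sysusers and the tmpfiles services above are driven purely by declarative fragments under sysusers.d/ and tmpfiles.d/; the merged configuration each one acted on can be dumped directly:

  $ systemd-sysusers --cat-config          # effective user/group declarations
  $ systemd-tmpfiles --cat-config | less   # effective tmpfiles lines, with origins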
Jul 7 06:03:52.943113 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:03:52.954297 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jul 7 06:03:52.954313 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jul 7 06:03:52.958255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:03:53.274801 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:03:53.287130 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:03:53.305978 systemd-udevd[1221]: Using default interface naming scheme 'v255'. Jul 7 06:03:53.318734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:03:53.334036 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:03:53.339455 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:03:53.354225 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jul 7 06:03:53.362990 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1233) Jul 7 06:03:53.398813 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:03:53.407553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:03:53.456395 systemd-networkd[1228]: lo: Link UP Jul 7 06:03:53.456409 systemd-networkd[1228]: lo: Gained carrier Jul 7 06:03:53.457089 systemd-networkd[1228]: Enumeration completed Jul 7 06:03:53.457529 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:03:53.457533 systemd-networkd[1228]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:03:53.458151 systemd-networkd[1228]: eth0: Link UP Jul 7 06:03:53.458155 systemd-networkd[1228]: eth0: Gained carrier Jul 7 06:03:53.458167 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:03:53.460101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:03:53.461464 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:03:53.464367 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:03:53.468230 systemd-networkd[1228]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:03:53.472953 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 06:03:53.480020 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 06:03:53.492014 lvm[1260]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 06:03:53.495401 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:03:53.521480 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 06:03:53.523054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:03:53.538075 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 06:03:53.543246 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
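eth0 above is matched by the stock catch-all unit /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" warning) and picks up 10.0.0.84/16 from the DHCP server at 10.0.0.1. Both the matching unit and the resulting lease are inspectable:

  $ cat /usr/lib/systemd/network/zz-default.network   # the unit named in the log
  $ networkctl status eth0                            # address, gateway and lease state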
Jul 7 06:03:53.582810 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 06:03:53.584665 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:03:53.586227 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:03:53.586264 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:03:53.587600 systemd[1]: Reached target machines.target - Containers. Jul 7 06:03:53.591934 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 06:03:53.606082 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:03:53.609845 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:03:53.611808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:03:53.612912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:03:53.616746 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 06:03:53.621240 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:03:53.623319 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 06:03:53.626054 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:03:53.633957 kernel: loop0: detected capacity change from 0 to 114328 Jul 7 06:03:53.637468 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:03:53.639771 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 06:03:53.649283 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:03:53.681054 kernel: loop1: detected capacity change from 0 to 114432 Jul 7 06:03:53.723919 kernel: loop2: detected capacity change from 0 to 203944 Jul 7 06:03:53.763910 kernel: loop3: detected capacity change from 0 to 114328 Jul 7 06:03:53.768906 kernel: loop4: detected capacity change from 0 to 114432 Jul 7 06:03:53.773922 kernel: loop5: detected capacity change from 0 to 203944 Jul 7 06:03:53.778461 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 06:03:53.778851 (sd-merge)[1288]: Merged extensions into '/usr'. Jul 7 06:03:53.790693 systemd[1]: Reloading requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:03:53.790710 systemd[1]: Reloading... Jul 7 06:03:53.832916 zram_generator::config[1316]: No configuration found. Jul 7 06:03:53.871943 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:03:53.933495 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:03:53.978192 systemd[1]: Reloading finished in 187 ms. Jul 7 06:03:53.992771 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:03:53.994257 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
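The (sd-merge) lines record systemd-sysext composing /usr from the extension images staged earlier; the loop0..loop5 capacity changes just above are those images (reached via /etc/extensions links, including the kubernetes.raw link Ignition wrote) being attached. A short sketch of the usual follow-ups:

  $ systemd-sysext status    # which extensions are merged and into which hierarchy
  $ systemd-sysext refresh   # re-merge after adding or removing an image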
Jul 7 06:03:54.009079 systemd[1]: Starting ensure-sysext.service... Jul 7 06:03:54.010847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:03:54.015725 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:03:54.015740 systemd[1]: Reloading... Jul 7 06:03:54.026734 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:03:54.027027 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:03:54.027662 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:03:54.027878 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Jul 7 06:03:54.027947 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Jul 7 06:03:54.030026 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:03:54.030040 systemd-tmpfiles[1358]: Skipping /boot Jul 7 06:03:54.036705 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:03:54.036721 systemd-tmpfiles[1358]: Skipping /boot Jul 7 06:03:54.058032 zram_generator::config[1388]: No configuration found. Jul 7 06:03:54.145846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:03:54.190093 systemd[1]: Reloading finished in 174 ms. Jul 7 06:03:54.206786 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:03:54.224281 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:03:54.226819 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:03:54.232169 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:03:54.235936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:03:54.241341 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:03:54.244586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:03:54.249126 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:03:54.253228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:03:54.256214 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:03:54.259056 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:03:54.261203 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:03:54.261360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:03:54.264080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:03:54.264270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:03:54.268804 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:03:54.272192 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:03:54.275120 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 7 06:03:54.281371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:03:54.290218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:03:54.295158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:03:54.298231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:03:54.301035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:03:54.303559 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:03:54.305221 augenrules[1466]: No rules Jul 7 06:03:54.307090 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:03:54.308731 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:03:54.309426 systemd-resolved[1434]: Positive Trust Anchors: Jul 7 06:03:54.309443 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:03:54.309475 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:03:54.310926 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:03:54.312636 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:03:54.312776 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:03:54.314619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:03:54.314760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:03:54.316326 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:03:54.316509 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:03:54.318405 systemd-resolved[1434]: Defaulting to hostname 'linux'. Jul 7 06:03:54.321806 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:03:54.323408 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:03:54.329291 systemd[1]: Reached target network.target - Network. Jul 7 06:03:54.330420 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:03:54.331814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:03:54.345149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:03:54.347256 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:03:54.349240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:03:54.353181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:03:54.354276 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
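systemd-resolved above installs the root DNSSEC trust anchor, lists the standard negative trust anchors for private zones, and, with no hostname configured yet, falls back to 'linux'. Its runtime view is available via resolvectl (a quick check, not part of the boot flow):

  $ resolvectl status   # per-link DNS servers, DNSSEC setting, search domains
  $ hostnamectl         # shows the defaulted hostname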
Jul 7 06:03:54.354408 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:03:54.355267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:03:54.355411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:03:54.357054 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:03:54.357201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:03:54.358670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:03:54.358807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:03:54.360385 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:03:54.360582 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:03:54.363715 systemd[1]: Finished ensure-sysext.service. Jul 7 06:03:54.367936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:03:54.368000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:03:54.379020 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:03:54.422962 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:03:54.423700 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 06:03:54.423754 systemd-timesyncd[1501]: Initial clock synchronization to Mon 2025-07-07 06:03:54.643974 UTC. Jul 7 06:03:54.424553 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:03:54.425703 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:03:54.426937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:03:54.428130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:03:54.429357 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:03:54.429392 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:03:54.430289 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 06:03:54.431428 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:03:54.432574 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:03:54.433783 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:03:54.435203 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:03:54.437761 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:03:54.439855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:03:54.445913 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:03:54.446960 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:03:54.447930 systemd[1]: Reached target basic.target - Basic System. 
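With sysinit reached, timesyncd locks onto the NTP server handed out on this network (10.0.0.1:123 above, stepping the clock to 2025-07-07 06:03:54.64 UTC) and the timer units come up. Both are easy to confirm once logged in:

  $ timedatectl timesync-status   # server, poll interval, offset
  $ systemctl list-timers         # logrotate, tmpfiles-clean, mdadm check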
Jul 7 06:03:54.449000 systemd[1]: System is tainted: cgroupsv1 Jul 7 06:03:54.449050 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:03:54.449069 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:03:54.450187 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:03:54.452207 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:03:54.454137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:03:54.458608 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:03:54.461160 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:03:54.463049 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:03:54.467382 jq[1507]: false Jul 7 06:03:54.470326 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:03:54.472554 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:03:54.478009 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:03:54.484495 extend-filesystems[1509]: Found loop3 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found loop4 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found loop5 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda1 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda2 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda3 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found usr Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda4 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda6 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda7 Jul 7 06:03:54.487231 extend-filesystems[1509]: Found vda9 Jul 7 06:03:54.487231 extend-filesystems[1509]: Checking size of /dev/vda9 Jul 7 06:03:54.487064 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:03:54.493135 dbus-daemon[1506]: [system] SELinux support is enabled Jul 7 06:03:54.496474 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:03:54.500614 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:03:54.503523 extend-filesystems[1509]: Resized partition /dev/vda9 Jul 7 06:03:54.510291 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Jul 7 06:03:54.511652 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:03:54.511795 jq[1534]: true Jul 7 06:03:54.513446 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:03:54.517102 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:03:54.527654 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:03:54.527950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:03:54.528305 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:03:54.528507 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:03:54.533359 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 7 06:03:54.533595 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:03:54.533901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1233) Jul 7 06:03:54.545929 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:03:54.548971 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:03:54.555797 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:03:54.567573 jq[1540]: true Jul 7 06:03:54.555830 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:03:54.577390 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:03:54.577390 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:03:54.577390 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 06:03:54.557824 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:03:54.584155 extend-filesystems[1509]: Resized filesystem in /dev/vda9 Jul 7 06:03:54.557842 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:03:54.569449 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 06:03:54.570846 systemd-logind[1522]: New seat seat0. Jul 7 06:03:54.577372 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:03:54.582275 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:03:54.582504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:03:54.591377 tar[1538]: linux-arm64/helm Jul 7 06:03:54.605908 update_engine[1530]: I20250707 06:03:54.605294 1530 main.cc:92] Flatcar Update Engine starting Jul 7 06:03:54.608372 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:03:54.609551 update_engine[1530]: I20250707 06:03:54.608418 1530 update_check_scheduler.cc:74] Next update check in 11m19s Jul 7 06:03:54.610986 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:03:54.621229 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:03:54.638913 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:03:54.640241 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:03:54.644204 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:03:54.694322 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:03:54.758044 systemd-networkd[1228]: eth0: Gained IPv6LL Jul 7 06:03:54.763610 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:03:54.765476 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:03:54.780258 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... 
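extend-filesystems above grows the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 4k blocks, after enumerating the disk's partitions. The same growth can be reproduced or verified by hand (root required, device names taken from the log):

  $ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda   # vda9 holds /
  $ resize2fs /dev/vda9                             # online-grow ext4 to fill the partition
  $ df -h /                                         # confirm the new size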
Jul 7 06:03:54.784178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:03:54.790373 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:03:54.797925 containerd[1541]: time="2025-07-07T06:03:54.795098800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:03:54.823696 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:03:54.823949 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:03:54.825546 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:03:54.831839 containerd[1541]: time="2025-07-07T06:03:54.831622720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:03:54.832423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833346600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833373400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833388640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833597960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833618360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833672960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833686080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833870320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833904680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833920400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834663 containerd[1541]: time="2025-07-07T06:03:54.833931000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834941 containerd[1541]: time="2025-07-07T06:03:54.834008000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834941 containerd[1541]: time="2025-07-07T06:03:54.834198240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834941 containerd[1541]: time="2025-07-07T06:03:54.834336840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:03:54.834941 containerd[1541]: time="2025-07-07T06:03:54.834349920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:03:54.834941 containerd[1541]: time="2025-07-07T06:03:54.834421840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:03:54.834941 containerd[1541]: time="2025-07-07T06:03:54.834459680Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:03:54.837981 containerd[1541]: time="2025-07-07T06:03:54.837955320Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:03:54.838548 containerd[1541]: time="2025-07-07T06:03:54.838195120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:03:54.838548 containerd[1541]: time="2025-07-07T06:03:54.838230640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:03:54.838548 containerd[1541]: time="2025-07-07T06:03:54.838246920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 06:03:54.838548 containerd[1541]: time="2025-07-07T06:03:54.838260840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 06:03:54.838548 containerd[1541]: time="2025-07-07T06:03:54.838396280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:03:54.839719 containerd[1541]: time="2025-07-07T06:03:54.839685040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.839933080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.839957400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.839970600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.839985440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840005520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840017600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840032720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840047480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840060800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840073800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840085760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840105720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840118720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.840800 containerd[1541]: time="2025-07-07T06:03:54.840131040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840151880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840165280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840178320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840195080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840208000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840220840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840235120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840246680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840258720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840271120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840286880Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840308200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840319760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841098 containerd[1541]: time="2025-07-07T06:03:54.840329400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840430200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840446120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840457280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840469400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840479200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840491320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840501320Z" level=info msg="NRI interface is disabled by configuration." Jul 7 06:03:54.841373 containerd[1541]: time="2025-07-07T06:03:54.840513720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 06:03:54.843853 containerd[1541]: time="2025-07-07T06:03:54.843712560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:03:54.844363 containerd[1541]: time="2025-07-07T06:03:54.844317240Z" level=info msg="Connect containerd service" Jul 7 06:03:54.844469 containerd[1541]: time="2025-07-07T06:03:54.844455160Z" level=info msg="using legacy CRI server" Jul 7 06:03:54.844557 containerd[1541]: time="2025-07-07T06:03:54.844541520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:03:54.844734 containerd[1541]: time="2025-07-07T06:03:54.844711960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:03:54.845545 containerd[1541]: time="2025-07-07T06:03:54.845512280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:03:54.845840 
containerd[1541]: time="2025-07-07T06:03:54.845756200Z" level=info msg="Start subscribing containerd event" Jul 7 06:03:54.845840 containerd[1541]: time="2025-07-07T06:03:54.845819040Z" level=info msg="Start recovering state" Jul 7 06:03:54.845922 containerd[1541]: time="2025-07-07T06:03:54.845897920Z" level=info msg="Start event monitor" Jul 7 06:03:54.845922 containerd[1541]: time="2025-07-07T06:03:54.845910280Z" level=info msg="Start snapshots syncer" Jul 7 06:03:54.845922 containerd[1541]: time="2025-07-07T06:03:54.845919920Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:03:54.845988 containerd[1541]: time="2025-07-07T06:03:54.845927640Z" level=info msg="Start streaming server" Jul 7 06:03:54.846250 containerd[1541]: time="2025-07-07T06:03:54.846226320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:03:54.846362 containerd[1541]: time="2025-07-07T06:03:54.846348000Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:03:54.846459 containerd[1541]: time="2025-07-07T06:03:54.846446600Z" level=info msg="containerd successfully booted in 0.055072s" Jul 7 06:03:54.846533 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:03:54.954272 tar[1538]: linux-arm64/LICENSE Jul 7 06:03:54.954441 tar[1538]: linux-arm64/README.md Jul 7 06:03:54.963569 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:03:55.385809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:03:55.389866 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:03:55.764695 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:03:55.786085 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:03:55.801219 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:03:55.807685 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:03:55.808108 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:03:55.811394 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:03:55.829558 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:03:55.832811 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:03:55.835215 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:03:55.836846 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:03:55.838259 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:03:55.839534 systemd[1]: Startup finished in 5.102s (kernel) + 3.627s (userspace) = 8.729s. Jul 7 06:03:55.843379 kubelet[1624]: E0707 06:03:55.843328 1624 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:03:55.847067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:03:55.847226 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:04:01.007276 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
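
The "failed to load cni during init" error a few entries above is expected at this point in the boot: containerd's CRI plugin looks for a network config in /etc/cni/net.d (per the CniConfig block in the config dump above), and nothing has installed one yet; a network addon normally drops that file later. As a minimal sketch of what such a config contains, the following hedged example shows a single-bridge .conflist, rendered in YAML for readability (on disk it is a JSON file, and every name and subnet below is illustrative, not taken from this node):

    # hypothetical /etc/cni/net.d/10-example.conflist -- real clusters get this from their network addon
    cniVersion: "0.4.0"
    name: example-bridge-net          # assumed name, not from this log
    plugins:
      - type: bridge
        bridge: cni0
        isGateway: true
        ipMasq: true
        ipam:
          type: host-local
          subnet: 10.244.0.0/24       # illustrative pod subnet
          routes:
            - dst: 0.0.0.0/0
      - type: portmap
        capabilities:
          portMappings: true

Once a file like this exists, the conf syncer started above ("Start cni network conf syncer for default") picks it up without a containerd restart.
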
Jul 7 06:04:01.021113 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:44144.service - OpenSSH per-connection server daemon (10.0.0.1:44144). Jul 7 06:04:01.069684 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 44144 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.071249 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.089628 systemd-logind[1522]: New session 1 of user core. Jul 7 06:04:01.090543 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:04:01.103127 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:04:01.112725 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:04:01.115746 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:04:01.122599 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:04:01.195100 systemd[1662]: Queued start job for default target default.target. Jul 7 06:04:01.195463 systemd[1662]: Created slice app.slice - User Application Slice. Jul 7 06:04:01.195487 systemd[1662]: Reached target paths.target - Paths. Jul 7 06:04:01.195498 systemd[1662]: Reached target timers.target - Timers. Jul 7 06:04:01.204993 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:04:01.211311 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:04:01.211381 systemd[1662]: Reached target sockets.target - Sockets. Jul 7 06:04:01.211393 systemd[1662]: Reached target basic.target - Basic System. Jul 7 06:04:01.211437 systemd[1662]: Reached target default.target - Main User Target. Jul 7 06:04:01.211460 systemd[1662]: Startup finished in 84ms. Jul 7 06:04:01.211921 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:04:01.213376 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:04:01.276326 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:44160.service - OpenSSH per-connection server daemon (10.0.0.1:44160). Jul 7 06:04:01.311862 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 44160 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.313118 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.317424 systemd-logind[1522]: New session 2 of user core. Jul 7 06:04:01.324220 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:04:01.377067 sshd[1674]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:01.389124 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:44162.service - OpenSSH per-connection server daemon (10.0.0.1:44162). Jul 7 06:04:01.389497 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:44160.service: Deactivated successfully. Jul 7 06:04:01.391181 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:04:01.391860 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:04:01.393036 systemd-logind[1522]: Removed session 2. Jul 7 06:04:01.423176 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 44162 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.424314 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.428221 systemd-logind[1522]: New session 3 of user core. Jul 7 06:04:01.437197 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 7 06:04:01.485401 sshd[1679]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:01.498136 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:44166.service - OpenSSH per-connection server daemon (10.0.0.1:44166). Jul 7 06:04:01.498489 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:44162.service: Deactivated successfully. Jul 7 06:04:01.500951 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:04:01.501098 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:04:01.502227 systemd-logind[1522]: Removed session 3. Jul 7 06:04:01.531749 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 44166 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.532851 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.536271 systemd-logind[1522]: New session 4 of user core. Jul 7 06:04:01.550125 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:04:01.601357 sshd[1687]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:01.615209 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:44178.service - OpenSSH per-connection server daemon (10.0.0.1:44178). Jul 7 06:04:01.615626 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:44166.service: Deactivated successfully. Jul 7 06:04:01.617436 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:04:01.617513 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:04:01.619108 systemd-logind[1522]: Removed session 4. Jul 7 06:04:01.648909 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 44178 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.650062 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.654070 systemd-logind[1522]: New session 5 of user core. Jul 7 06:04:01.667150 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:04:01.733717 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:04:01.734018 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:04:01.754756 sudo[1702]: pam_unix(sudo:session): session closed for user root Jul 7 06:04:01.756570 sshd[1695]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:01.770243 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:44190.service - OpenSSH per-connection server daemon (10.0.0.1:44190). Jul 7 06:04:01.770600 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:44178.service: Deactivated successfully. Jul 7 06:04:01.773039 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:04:01.773238 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:04:01.774268 systemd-logind[1522]: Removed session 5. Jul 7 06:04:01.804052 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 44190 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.805508 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.808910 systemd-logind[1522]: New session 6 of user core. Jul 7 06:04:01.818111 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 06:04:01.868597 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:04:01.868876 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:04:01.871880 sudo[1712]: pam_unix(sudo:session): session closed for user root Jul 7 06:04:01.876284 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 06:04:01.876565 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:04:01.897295 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 06:04:01.898266 auditctl[1715]: No rules Jul 7 06:04:01.899114 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:04:01.899347 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 06:04:01.901382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:04:01.923925 augenrules[1734]: No rules Jul 7 06:04:01.925554 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:04:01.926740 sudo[1711]: pam_unix(sudo:session): session closed for user root Jul 7 06:04:01.928267 sshd[1704]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:01.939218 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:44194.service - OpenSSH per-connection server daemon (10.0.0.1:44194). Jul 7 06:04:01.939961 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:44190.service: Deactivated successfully. Jul 7 06:04:01.941732 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:04:01.941811 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:04:01.943224 systemd-logind[1522]: Removed session 6. Jul 7 06:04:01.972602 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 44194 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:04:01.973757 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:04:01.977956 systemd-logind[1522]: New session 7 of user core. Jul 7 06:04:01.989134 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:04:02.039883 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:04:02.040173 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:04:02.345137 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:04:02.345324 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:04:02.598494 dockerd[1765]: time="2025-07-07T06:04:02.597948855Z" level=info msg="Starting up" Jul 7 06:04:02.849743 dockerd[1765]: time="2025-07-07T06:04:02.849644618Z" level=info msg="Loading containers: start." Jul 7 06:04:02.939922 kernel: Initializing XFRM netlink socket Jul 7 06:04:03.006212 systemd-networkd[1228]: docker0: Link UP Jul 7 06:04:03.021059 dockerd[1765]: time="2025-07-07T06:04:03.021024173Z" level=info msg="Loading containers: done." Jul 7 06:04:03.041401 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1537880228-merged.mount: Deactivated successfully. 
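
A note on the unit names here: systemd derives mount unit names from paths by turning "/" into "-" and escaping a literal "-" in a path component as "\x2d", which is why the transient docker/containerd work mounts show up as units like var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1537880228-merged.mount. These are short-lived mounts the daemons create and tear down, so their "Deactivated successfully" entries are routine cleanup, not failures. The mapping can be checked with the escaping helper:

    $ systemd-escape --path /var/lib/docker/overlay2
    var-lib-docker-overlay2
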
Jul 7 06:04:03.041868 dockerd[1765]: time="2025-07-07T06:04:03.041832271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:04:03.041974 dockerd[1765]: time="2025-07-07T06:04:03.041945089Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:04:03.042061 dockerd[1765]: time="2025-07-07T06:04:03.042044905Z" level=info msg="Daemon has completed initialization" Jul 7 06:04:03.069015 dockerd[1765]: time="2025-07-07T06:04:03.068879468Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:04:03.069276 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:04:03.793594 containerd[1541]: time="2025-07-07T06:04:03.793556376Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 06:04:04.361598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657634450.mount: Deactivated successfully. Jul 7 06:04:05.178696 containerd[1541]: time="2025-07-07T06:04:05.178650506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:05.179183 containerd[1541]: time="2025-07-07T06:04:05.179146072Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 7 06:04:05.179911 containerd[1541]: time="2025-07-07T06:04:05.179850097Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:05.182858 containerd[1541]: time="2025-07-07T06:04:05.182825865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:05.184947 containerd[1541]: time="2025-07-07T06:04:05.184912598Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.391312195s" Jul 7 06:04:05.184997 containerd[1541]: time="2025-07-07T06:04:05.184956635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 7 06:04:05.188650 containerd[1541]: time="2025-07-07T06:04:05.188611691Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 06:04:06.096578 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:04:06.104092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:06.201770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 06:04:06.206478 (kubelet)[1984]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:04:06.243700 kubelet[1984]: E0707 06:04:06.243638 1984 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:04:06.246341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:04:06.246485 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:04:06.340324 containerd[1541]: time="2025-07-07T06:04:06.340278117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:06.341327 containerd[1541]: time="2025-07-07T06:04:06.340803145Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 7 06:04:06.341775 containerd[1541]: time="2025-07-07T06:04:06.341750554Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:06.345523 containerd[1541]: time="2025-07-07T06:04:06.345464112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:06.346589 containerd[1541]: time="2025-07-07T06:04:06.346488809Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.157838972s" Jul 7 06:04:06.346589 containerd[1541]: time="2025-07-07T06:04:06.346531880Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 7 06:04:06.347092 containerd[1541]: time="2025-07-07T06:04:06.347019150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 06:04:07.339787 containerd[1541]: time="2025-07-07T06:04:07.339737805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:07.340548 containerd[1541]: time="2025-07-07T06:04:07.340485652Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 7 06:04:07.341511 containerd[1541]: time="2025-07-07T06:04:07.341437906Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:07.344323 containerd[1541]: time="2025-07-07T06:04:07.344262571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 7 06:04:07.345468 containerd[1541]: time="2025-07-07T06:04:07.345392565Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 998.341985ms" Jul 7 06:04:07.345468 containerd[1541]: time="2025-07-07T06:04:07.345426875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 7 06:04:07.346303 containerd[1541]: time="2025-07-07T06:04:07.345814212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 06:04:08.299152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064621888.mount: Deactivated successfully. Jul 7 06:04:08.639649 containerd[1541]: time="2025-07-07T06:04:08.639407878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:08.640613 containerd[1541]: time="2025-07-07T06:04:08.640364700Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 7 06:04:08.641293 containerd[1541]: time="2025-07-07T06:04:08.641244712Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:08.643587 containerd[1541]: time="2025-07-07T06:04:08.643542292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:08.644146 containerd[1541]: time="2025-07-07T06:04:08.644106297Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.298262007s" Jul 7 06:04:08.644207 containerd[1541]: time="2025-07-07T06:04:08.644145164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 7 06:04:08.644682 containerd[1541]: time="2025-07-07T06:04:08.644552090Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:04:09.121424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714162813.mount: Deactivated successfully. 
Jul 7 06:04:09.918179 containerd[1541]: time="2025-07-07T06:04:09.918132497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:09.919240 containerd[1541]: time="2025-07-07T06:04:09.919207202Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 7 06:04:09.919913 containerd[1541]: time="2025-07-07T06:04:09.919867917Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:09.923709 containerd[1541]: time="2025-07-07T06:04:09.923666705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:09.924706 containerd[1541]: time="2025-07-07T06:04:09.924446604Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.279863569s" Jul 7 06:04:09.924706 containerd[1541]: time="2025-07-07T06:04:09.924479302Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 06:04:09.925039 containerd[1541]: time="2025-07-07T06:04:09.925008782Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:04:10.357976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217551532.mount: Deactivated successfully. 
Jul 7 06:04:10.367317 containerd[1541]: time="2025-07-07T06:04:10.367265456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:10.368048 containerd[1541]: time="2025-07-07T06:04:10.368015393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 06:04:10.368840 containerd[1541]: time="2025-07-07T06:04:10.368791628Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:10.370965 containerd[1541]: time="2025-07-07T06:04:10.370925128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:10.372053 containerd[1541]: time="2025-07-07T06:04:10.371922702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.877652ms" Jul 7 06:04:10.372053 containerd[1541]: time="2025-07-07T06:04:10.371958274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:04:10.372527 containerd[1541]: time="2025-07-07T06:04:10.372504617Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 06:04:10.930282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246658786.mount: Deactivated successfully. Jul 7 06:04:12.270009 containerd[1541]: time="2025-07-07T06:04:12.269963249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:12.271051 containerd[1541]: time="2025-07-07T06:04:12.270739692Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 7 06:04:12.271783 containerd[1541]: time="2025-07-07T06:04:12.271739409Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:12.277063 containerd[1541]: time="2025-07-07T06:04:12.277029981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:12.278343 containerd[1541]: time="2025-07-07T06:04:12.278313102Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.905777579s" Jul 7 06:04:12.278591 containerd[1541]: time="2025-07-07T06:04:12.278420407Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 7 06:04:16.258165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 7 06:04:16.267133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:16.381555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:16.385696 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:04:16.415026 kubelet[2150]: E0707 06:04:16.414967 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:04:16.417504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:04:16.417758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:04:17.922220 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:17.937124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:17.956251 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)... Jul 7 06:04:17.956264 systemd[1]: Reloading... Jul 7 06:04:18.006918 zram_generator::config[2211]: No configuration found. Jul 7 06:04:18.112826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:04:18.165494 systemd[1]: Reloading finished in 208 ms. Jul 7 06:04:18.202596 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:04:18.202655 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:04:18.202920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:18.204331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:18.302333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:18.306720 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:04:18.343901 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:18.343901 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:04:18.343901 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
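
This fourth kubelet start is the one that sticks: the three earlier attempts (pids 1624, 1984, and 2150) all exited with "open /var/lib/kubelet/config.yaml: no such file or directory" because the kubelet refuses to run without its config file, which is only written during cluster bootstrap (kubeadm generates it on init/join; here it presumably arrived via the install.sh run in session 7 above). The deprecation warnings just above make the same point from the other direction: flags such as --container-runtime-endpoint are expected to live in that file. As a minimal illustrative sketch of the file's shape (assumed values that merely mirror what kubelet[2265] reports in the entries that follow, not this node's actual file):

    # /var/lib/kubelet/config.yaml -- hedged sketch, not the file install.sh produced
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt    # matches the client-ca bundle logged below
    cgroupDriver: cgroupfs                           # kubelet reports CgroupDriver "cgroupfs" below
    staticPodPath: /etc/kubernetes/manifests         # the static pod path the kubelet adds below
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

With this in place the kubelet can come up even while the API server at 10.0.0.84:6443 is still unreachable, which is exactly the connection-refused churn that fills the entries below: the control-plane static pods it is about to launch are what eventually serve that address.
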
Jul 7 06:04:18.344323 kubelet[2265]: I0707 06:04:18.343966 2265 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:04:18.881681 kubelet[2265]: I0707 06:04:18.881637 2265 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:04:18.881681 kubelet[2265]: I0707 06:04:18.881674 2265 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:04:18.881963 kubelet[2265]: I0707 06:04:18.881936 2265 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:04:18.934387 kubelet[2265]: E0707 06:04:18.934354 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:18.935443 kubelet[2265]: I0707 06:04:18.935315 2265 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:04:18.941789 kubelet[2265]: E0707 06:04:18.941756 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:04:18.941976 kubelet[2265]: I0707 06:04:18.941905 2265 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:04:18.945293 kubelet[2265]: I0707 06:04:18.945269 2265 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:04:18.946318 kubelet[2265]: I0707 06:04:18.946245 2265 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:04:18.946410 kubelet[2265]: I0707 06:04:18.946382 2265 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:04:18.946565 kubelet[2265]: I0707 06:04:18.946411 2265 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 06:04:18.946646 kubelet[2265]: I0707 06:04:18.946636 2265 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:04:18.946646 kubelet[2265]: I0707 06:04:18.946645 2265 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:04:18.946906 kubelet[2265]: I0707 06:04:18.946875 2265 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:18.948852 kubelet[2265]: I0707 06:04:18.948786 2265 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:04:18.948852 kubelet[2265]: I0707 06:04:18.948816 2265 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:04:18.948852 kubelet[2265]: I0707 06:04:18.948843 2265 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:04:18.948970 kubelet[2265]: I0707 06:04:18.948934 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:04:18.952176 kubelet[2265]: W0707 06:04:18.951946 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:18.952176 kubelet[2265]: E0707 06:04:18.952070 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:18.952781 kubelet[2265]: W0707 06:04:18.952707 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:18.952781 kubelet[2265]: E0707 06:04:18.952763 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:18.955706 kubelet[2265]: I0707 06:04:18.955676 2265 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:04:18.956543 kubelet[2265]: I0707 06:04:18.956518 2265 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:04:18.956769 kubelet[2265]: W0707 06:04:18.956626 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:04:18.958650 kubelet[2265]: I0707 06:04:18.958617 2265 server.go:1274] "Started kubelet" Jul 7 06:04:18.959068 kubelet[2265]: I0707 06:04:18.959017 2265 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:04:18.960779 kubelet[2265]: I0707 06:04:18.960279 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:04:18.960779 kubelet[2265]: I0707 06:04:18.960667 2265 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:04:18.960779 kubelet[2265]: I0707 06:04:18.960684 2265 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:04:18.964884 kubelet[2265]: I0707 06:04:18.963200 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:04:18.964884 kubelet[2265]: I0707 06:04:18.963440 2265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:04:18.964884 kubelet[2265]: I0707 06:04:18.964315 2265 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:04:18.965283 kubelet[2265]: I0707 06:04:18.965260 2265 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:04:18.965328 kubelet[2265]: I0707 06:04:18.965316 2265 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:04:18.966463 kubelet[2265]: E0707 06:04:18.966037 2265 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:18.966463 kubelet[2265]: E0707 06:04:18.966184 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" Jul 7 06:04:18.966463 kubelet[2265]: W0707 06:04:18.966260 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:18.966463 
kubelet[2265]: E0707 06:04:18.966310 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:18.966625 kubelet[2265]: I0707 06:04:18.966609 2265 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:04:18.967427 kubelet[2265]: I0707 06:04:18.966693 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:04:18.967801 kubelet[2265]: E0707 06:04:18.966229 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2e654823cc4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:04:18.958589124 +0000 UTC m=+0.648274713,LastTimestamp:2025-07-07 06:04:18.958589124 +0000 UTC m=+0.648274713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:04:18.969732 kubelet[2265]: E0707 06:04:18.969625 2265 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:04:18.969823 kubelet[2265]: I0707 06:04:18.969762 2265 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:04:18.977237 kubelet[2265]: I0707 06:04:18.977117 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:04:18.978145 kubelet[2265]: I0707 06:04:18.978129 2265 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:04:18.978271 kubelet[2265]: I0707 06:04:18.978259 2265 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:04:18.978341 kubelet[2265]: I0707 06:04:18.978331 2265 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:04:18.978439 kubelet[2265]: E0707 06:04:18.978422 2265 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:04:18.984192 kubelet[2265]: W0707 06:04:18.984149 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:18.984431 kubelet[2265]: E0707 06:04:18.984300 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:18.990094 kubelet[2265]: I0707 06:04:18.990057 2265 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:04:18.990189 kubelet[2265]: I0707 06:04:18.990112 2265 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:04:18.990189 kubelet[2265]: I0707 06:04:18.990129 2265 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:19.066757 kubelet[2265]: E0707 06:04:19.066727 2265 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:19.074768 kubelet[2265]: I0707 06:04:19.074748 2265 policy_none.go:49] "None policy: Start" Jul 7 06:04:19.075854 kubelet[2265]: I0707 06:04:19.075555 2265 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:04:19.075854 kubelet[2265]: I0707 06:04:19.075582 2265 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:04:19.078581 kubelet[2265]: E0707 06:04:19.078546 2265 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:04:19.079865 kubelet[2265]: I0707 06:04:19.079840 2265 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:04:19.080148 kubelet[2265]: I0707 06:04:19.080124 2265 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:04:19.080424 kubelet[2265]: I0707 06:04:19.080216 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:04:19.080629 kubelet[2265]: I0707 06:04:19.080611 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:04:19.081626 kubelet[2265]: E0707 06:04:19.081604 2265 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:04:19.167401 kubelet[2265]: E0707 06:04:19.167317 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Jul 7 06:04:19.182272 kubelet[2265]: I0707 06:04:19.182251 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:19.182677 kubelet[2265]: E0707 
06:04:19.182653 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 7 06:04:19.366170 kubelet[2265]: I0707 06:04:19.366074 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9020e16f06b9c1519b0d82dfc2dd2b8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9020e16f06b9c1519b0d82dfc2dd2b8\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:19.366170 kubelet[2265]: I0707 06:04:19.366108 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9020e16f06b9c1519b0d82dfc2dd2b8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9020e16f06b9c1519b0d82dfc2dd2b8\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:19.366170 kubelet[2265]: I0707 06:04:19.366126 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:19.366170 kubelet[2265]: I0707 06:04:19.366148 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:04:19.366170 kubelet[2265]: I0707 06:04:19.366167 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:19.366578 kubelet[2265]: I0707 06:04:19.366183 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9020e16f06b9c1519b0d82dfc2dd2b8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9020e16f06b9c1519b0d82dfc2dd2b8\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:19.366578 kubelet[2265]: I0707 06:04:19.366198 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:19.366578 kubelet[2265]: I0707 06:04:19.366216 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:19.366578 kubelet[2265]: I0707 06:04:19.366238 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:19.383995 kubelet[2265]: I0707 06:04:19.383967 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:19.384287 kubelet[2265]: E0707 06:04:19.384264 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 7 06:04:19.568681 kubelet[2265]: E0707 06:04:19.568635 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" Jul 7 06:04:19.583837 kubelet[2265]: E0707 06:04:19.583805 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:19.584554 containerd[1541]: time="2025-07-07T06:04:19.584440019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9020e16f06b9c1519b0d82dfc2dd2b8,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:19.585169 kubelet[2265]: E0707 06:04:19.584458 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:19.585215 containerd[1541]: time="2025-07-07T06:04:19.584787967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:19.585317 kubelet[2265]: E0707 06:04:19.585295 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:19.585981 containerd[1541]: time="2025-07-07T06:04:19.585862767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:19.785934 kubelet[2265]: I0707 06:04:19.785883 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:19.786219 kubelet[2265]: E0707 06:04:19.786188 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Jul 7 06:04:19.802702 kubelet[2265]: W0707 06:04:19.802647 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:19.802760 kubelet[2265]: E0707 06:04:19.802705 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:19.993591 kubelet[2265]: W0707 06:04:19.993475 2265 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:19.993591 kubelet[2265]: E0707 06:04:19.993523 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:20.097274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242599636.mount: Deactivated successfully. Jul 7 06:04:20.102046 containerd[1541]: time="2025-07-07T06:04:20.102003907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:04:20.102936 containerd[1541]: time="2025-07-07T06:04:20.102909872Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:04:20.103615 containerd[1541]: time="2025-07-07T06:04:20.103458128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:04:20.104266 containerd[1541]: time="2025-07-07T06:04:20.104245137Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 06:04:20.104781 containerd[1541]: time="2025-07-07T06:04:20.104736897Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:04:20.106133 containerd[1541]: time="2025-07-07T06:04:20.106092902Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:04:20.107213 containerd[1541]: time="2025-07-07T06:04:20.107153298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:04:20.109630 containerd[1541]: time="2025-07-07T06:04:20.109549799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 524.707212ms" Jul 7 06:04:20.110905 containerd[1541]: time="2025-07-07T06:04:20.110836295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:04:20.111628 containerd[1541]: time="2025-07-07T06:04:20.111541584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.019155ms" Jul 7 
06:04:20.114835 containerd[1541]: time="2025-07-07T06:04:20.114778466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.853272ms" Jul 7 06:04:20.159126 kubelet[2265]: W0707 06:04:20.158875 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:20.159126 kubelet[2265]: E0707 06:04:20.158938 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:20.235133 containerd[1541]: time="2025-07-07T06:04:20.234558272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:20.235133 containerd[1541]: time="2025-07-07T06:04:20.234929195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:04:20.235133 containerd[1541]: time="2025-07-07T06:04:20.234980004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:20.235133 containerd[1541]: time="2025-07-07T06:04:20.235089471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:20.235806 containerd[1541]: time="2025-07-07T06:04:20.235732900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:20.235806 containerd[1541]: time="2025-07-07T06:04:20.235782028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:04:20.235806 containerd[1541]: time="2025-07-07T06:04:20.235797122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:20.235969 containerd[1541]: time="2025-07-07T06:04:20.235884968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:20.236818 containerd[1541]: time="2025-07-07T06:04:20.236562510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:20.236818 containerd[1541]: time="2025-07-07T06:04:20.236619406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:04:20.236818 containerd[1541]: time="2025-07-07T06:04:20.236634581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:20.236818 containerd[1541]: time="2025-07-07T06:04:20.236709774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:20.283831 containerd[1541]: time="2025-07-07T06:04:20.283776871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"42f23f125315a7a015a39804dfdac1c6eb09bde4ed0cff087bbd2e50960415e3\"" Jul 7 06:04:20.285405 kubelet[2265]: E0707 06:04:20.285371 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.285468 containerd[1541]: time="2025-07-07T06:04:20.285386884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b9020e16f06b9c1519b0d82dfc2dd2b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"936d9c093b40501917341ae64bceb348c6afbf6ebe537f638cce5ef928986d8e\"" Jul 7 06:04:20.286683 kubelet[2265]: E0707 06:04:20.286661 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.288278 containerd[1541]: time="2025-07-07T06:04:20.288249680Z" level=info msg="CreateContainer within sandbox \"42f23f125315a7a015a39804dfdac1c6eb09bde4ed0cff087bbd2e50960415e3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:04:20.288770 containerd[1541]: time="2025-07-07T06:04:20.288743523Z" level=info msg="CreateContainer within sandbox \"936d9c093b40501917341ae64bceb348c6afbf6ebe537f638cce5ef928986d8e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:04:20.289728 containerd[1541]: time="2025-07-07T06:04:20.289657656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"88dcdaa9a7a4a25b5a6135e092abdf600889fb67e68925c63ce5ac6a942fef86\"" Jul 7 06:04:20.290340 kubelet[2265]: E0707 06:04:20.290320 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.292346 containerd[1541]: time="2025-07-07T06:04:20.292311408Z" level=info msg="CreateContainer within sandbox \"88dcdaa9a7a4a25b5a6135e092abdf600889fb67e68925c63ce5ac6a942fef86\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:04:20.306475 containerd[1541]: time="2025-07-07T06:04:20.306367419Z" level=info msg="CreateContainer within sandbox \"42f23f125315a7a015a39804dfdac1c6eb09bde4ed0cff087bbd2e50960415e3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"816d48f922a9add2ea7aa98335a757b18d49b67b0d85175eed62b64bcd91a4fb\"" Jul 7 06:04:20.307009 containerd[1541]: time="2025-07-07T06:04:20.306974051Z" level=info msg="StartContainer for \"816d48f922a9add2ea7aa98335a757b18d49b67b0d85175eed62b64bcd91a4fb\"" Jul 7 06:04:20.309258 containerd[1541]: time="2025-07-07T06:04:20.309173640Z" level=info msg="CreateContainer within sandbox \"88dcdaa9a7a4a25b5a6135e092abdf600889fb67e68925c63ce5ac6a942fef86\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"93711ce5734279eb14d18080fe7183b59613426d66b10b83a9f8b02a772ec774\"" Jul 7 06:04:20.309861 containerd[1541]: time="2025-07-07T06:04:20.309507486Z" level=info msg="StartContainer for 
\"93711ce5734279eb14d18080fe7183b59613426d66b10b83a9f8b02a772ec774\"" Jul 7 06:04:20.310349 containerd[1541]: time="2025-07-07T06:04:20.310252093Z" level=info msg="CreateContainer within sandbox \"936d9c093b40501917341ae64bceb348c6afbf6ebe537f638cce5ef928986d8e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1db75528cd6e5c3bc731a35286659c343cf8ce35ecfe8c491b6cfce70b2f19eb\"" Jul 7 06:04:20.310636 containerd[1541]: time="2025-07-07T06:04:20.310547662Z" level=info msg="StartContainer for \"1db75528cd6e5c3bc731a35286659c343cf8ce35ecfe8c491b6cfce70b2f19eb\"" Jul 7 06:04:20.354586 kubelet[2265]: W0707 06:04:20.354509 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Jul 7 06:04:20.354689 kubelet[2265]: E0707 06:04:20.354649 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:04:20.374528 kubelet[2265]: E0707 06:04:20.369581 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" Jul 7 06:04:20.398354 containerd[1541]: time="2025-07-07T06:04:20.393030875Z" level=info msg="StartContainer for \"816d48f922a9add2ea7aa98335a757b18d49b67b0d85175eed62b64bcd91a4fb\" returns successfully" Jul 7 06:04:20.398354 containerd[1541]: time="2025-07-07T06:04:20.393154396Z" level=info msg="StartContainer for \"1db75528cd6e5c3bc731a35286659c343cf8ce35ecfe8c491b6cfce70b2f19eb\" returns successfully" Jul 7 06:04:20.398354 containerd[1541]: time="2025-07-07T06:04:20.393215095Z" level=info msg="StartContainer for \"93711ce5734279eb14d18080fe7183b59613426d66b10b83a9f8b02a772ec774\" returns successfully" Jul 7 06:04:20.588264 kubelet[2265]: I0707 06:04:20.587650 2265 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:20.990870 kubelet[2265]: E0707 06:04:20.990633 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.994830 kubelet[2265]: E0707 06:04:20.994703 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:20.995621 kubelet[2265]: E0707 06:04:20.995575 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:21.998064 kubelet[2265]: E0707 06:04:21.998034 2265 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:22.013769 kubelet[2265]: E0707 06:04:22.013694 2265 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:04:22.135437 
kubelet[2265]: I0707 06:04:22.135333 2265 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:04:22.135437 kubelet[2265]: E0707 06:04:22.135373 2265 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:04:22.145679 kubelet[2265]: E0707 06:04:22.145652 2265 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:22.956150 kubelet[2265]: I0707 06:04:22.956105 2265 apiserver.go:52] "Watching apiserver" Jul 7 06:04:22.966081 kubelet[2265]: I0707 06:04:22.966000 2265 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:04:23.773294 systemd[1]: Reloading requested from client PID 2543 ('systemctl') (unit session-7.scope)... Jul 7 06:04:23.773309 systemd[1]: Reloading... Jul 7 06:04:23.834921 zram_generator::config[2583]: No configuration found. Jul 7 06:04:23.920746 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:04:23.979446 systemd[1]: Reloading finished in 205 ms. Jul 7 06:04:24.006606 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:24.022774 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:04:24.023111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:24.033332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:04:24.125347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:04:24.129187 (kubelet)[2634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:04:24.159985 kubelet[2634]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:24.159985 kubelet[2634]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:04:24.159985 kubelet[2634]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:04:24.159985 kubelet[2634]: I0707 06:04:24.159271 2634 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:04:24.167277 kubelet[2634]: I0707 06:04:24.167239 2634 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:04:24.167277 kubelet[2634]: I0707 06:04:24.167266 2634 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:04:24.167502 kubelet[2634]: I0707 06:04:24.167485 2634 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:04:24.168835 kubelet[2634]: I0707 06:04:24.168816 2634 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
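
[Annotation: the entries above show the restarted kubelet coming up with "Client rotation is on" and loading its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, the path named in the log. A minimal stdlib-Go sketch (hypothetical, not part of the kubelet) for inspecting that certificate's validity window when auditing a node like this one:]

    // certcheck.go - sketch only, not kubelet code: print the validity window
    // of the kubelet's rotated client certificate. The path is the one that
    // appears in the log above; adjust for other setups.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
        if err != nil {
            log.Fatal(err)
        }
        // The file holds certificate and key concatenated; walk every PEM
        // block and decode only CERTIFICATE entries.
        for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
            if block.Type != "CERTIFICATE" {
                continue
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }
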
Jul 7 06:04:24.170749 kubelet[2634]: I0707 06:04:24.170712 2634 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:04:24.175699 kubelet[2634]: E0707 06:04:24.175614 2634 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:04:24.175699 kubelet[2634]: I0707 06:04:24.175642 2634 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:04:24.179877 kubelet[2634]: I0707 06:04:24.179847 2634 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:04:24.180263 kubelet[2634]: I0707 06:04:24.180249 2634 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:04:24.180412 kubelet[2634]: I0707 06:04:24.180380 2634 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:04:24.180594 kubelet[2634]: I0707 06:04:24.180412 2634 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 7 06:04:24.180677 kubelet[2634]: I0707 06:04:24.180610 2634 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:04:24.180677 kubelet[2634]: I0707 06:04:24.180620 2634 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:04:24.180677 kubelet[2634]: I0707 06:04:24.180653 2634 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:24.180772 kubelet[2634]: I0707 06:04:24.180758 2634 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:04:24.180797 kubelet[2634]: I0707 06:04:24.180775 2634 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:04:24.180797 kubelet[2634]: I0707 06:04:24.180793 2634 
kubelet.go:314] "Adding apiserver pod source" Jul 7 06:04:24.180834 kubelet[2634]: I0707 06:04:24.180806 2634 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:04:24.184547 kubelet[2634]: I0707 06:04:24.184498 2634 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:04:24.186260 kubelet[2634]: I0707 06:04:24.186235 2634 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:04:24.186730 kubelet[2634]: I0707 06:04:24.186713 2634 server.go:1274] "Started kubelet" Jul 7 06:04:24.190898 kubelet[2634]: I0707 06:04:24.188739 2634 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:04:24.193972 kubelet[2634]: I0707 06:04:24.191859 2634 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:04:24.193972 kubelet[2634]: I0707 06:04:24.192281 2634 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:04:24.193972 kubelet[2634]: I0707 06:04:24.192335 2634 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:04:24.193972 kubelet[2634]: I0707 06:04:24.192837 2634 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:04:24.199858 kubelet[2634]: I0707 06:04:24.199613 2634 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:04:24.199858 kubelet[2634]: I0707 06:04:24.199715 2634 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:04:24.200820 kubelet[2634]: I0707 06:04:24.200790 2634 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:04:24.204836 kubelet[2634]: I0707 06:04:24.202199 2634 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:04:24.204836 kubelet[2634]: E0707 06:04:24.202297 2634 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:04:24.204836 kubelet[2634]: I0707 06:04:24.202545 2634 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:04:24.204836 kubelet[2634]: I0707 06:04:24.202881 2634 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:04:24.209082 kubelet[2634]: I0707 06:04:24.205634 2634 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:04:24.209082 kubelet[2634]: I0707 06:04:24.206273 2634 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:04:24.209082 kubelet[2634]: E0707 06:04:24.209015 2634 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:04:24.213033 kubelet[2634]: I0707 06:04:24.212747 2634 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:04:24.213033 kubelet[2634]: I0707 06:04:24.212773 2634 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:04:24.213033 kubelet[2634]: I0707 06:04:24.212793 2634 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:04:24.213033 kubelet[2634]: E0707 06:04:24.212853 2634 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:04:24.245062 kubelet[2634]: I0707 06:04:24.245026 2634 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:04:24.245062 kubelet[2634]: I0707 06:04:24.245047 2634 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:04:24.245062 kubelet[2634]: I0707 06:04:24.245067 2634 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:04:24.245232 kubelet[2634]: I0707 06:04:24.245212 2634 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:04:24.245260 kubelet[2634]: I0707 06:04:24.245229 2634 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:04:24.245260 kubelet[2634]: I0707 06:04:24.245247 2634 policy_none.go:49] "None policy: Start" Jul 7 06:04:24.245771 kubelet[2634]: I0707 06:04:24.245743 2634 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:04:24.245771 kubelet[2634]: I0707 06:04:24.245769 2634 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:04:24.245923 kubelet[2634]: I0707 06:04:24.245911 2634 state_mem.go:75] "Updated machine memory state" Jul 7 06:04:24.246989 kubelet[2634]: I0707 06:04:24.246964 2634 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:04:24.247279 kubelet[2634]: I0707 06:04:24.247121 2634 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:04:24.247279 kubelet[2634]: I0707 06:04:24.247137 2634 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:04:24.247353 kubelet[2634]: I0707 06:04:24.247321 2634 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:04:24.351393 kubelet[2634]: I0707 06:04:24.351291 2634 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:04:24.357848 kubelet[2634]: I0707 06:04:24.357813 2634 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 7 06:04:24.357989 kubelet[2634]: I0707 06:04:24.357909 2634 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:04:24.403244 kubelet[2634]: I0707 06:04:24.403205 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:04:24.403244 kubelet[2634]: I0707 06:04:24.403239 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:24.403389 kubelet[2634]: I0707 06:04:24.403257 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:24.403389 kubelet[2634]: I0707 06:04:24.403273 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:24.403389 kubelet[2634]: I0707 06:04:24.403290 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:24.403389 kubelet[2634]: I0707 06:04:24.403306 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9020e16f06b9c1519b0d82dfc2dd2b8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9020e16f06b9c1519b0d82dfc2dd2b8\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:24.403389 kubelet[2634]: I0707 06:04:24.403320 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9020e16f06b9c1519b0d82dfc2dd2b8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b9020e16f06b9c1519b0d82dfc2dd2b8\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:24.403528 kubelet[2634]: I0707 06:04:24.403342 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9020e16f06b9c1519b0d82dfc2dd2b8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b9020e16f06b9c1519b0d82dfc2dd2b8\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:24.403528 kubelet[2634]: I0707 06:04:24.403358 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:04:24.626619 kubelet[2634]: E0707 06:04:24.626463 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:24.626619 kubelet[2634]: E0707 06:04:24.626503 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:24.626619 kubelet[2634]: E0707 06:04:24.626533 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:25.183640 kubelet[2634]: I0707 06:04:25.183606 2634 apiserver.go:52] "Watching apiserver" Jul 7 06:04:25.202785 kubelet[2634]: I0707 06:04:25.202747 2634 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Jul 7 06:04:25.223806 kubelet[2634]: E0707 06:04:25.223721 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:25.227773 kubelet[2634]: E0707 06:04:25.227655 2634 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:04:25.227965 kubelet[2634]: E0707 06:04:25.227848 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:25.229478 kubelet[2634]: E0707 06:04:25.229435 2634 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:04:25.230147 kubelet[2634]: E0707 06:04:25.229592 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:25.242325 kubelet[2634]: I0707 06:04:25.242209 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.242185142 podStartE2EDuration="1.242185142s" podCreationTimestamp="2025-07-07 06:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:25.242116307 +0000 UTC m=+1.109969450" watchObservedRunningTime="2025-07-07 06:04:25.242185142 +0000 UTC m=+1.110038245" Jul 7 06:04:25.255568 kubelet[2634]: I0707 06:04:25.255516 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2554986160000001 podStartE2EDuration="1.255498616s" podCreationTimestamp="2025-07-07 06:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:25.249590774 +0000 UTC m=+1.117443877" watchObservedRunningTime="2025-07-07 06:04:25.255498616 +0000 UTC m=+1.123351719" Jul 7 06:04:25.265115 kubelet[2634]: I0707 06:04:25.265061 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.265045601 podStartE2EDuration="1.265045601s" podCreationTimestamp="2025-07-07 06:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:25.255918466 +0000 UTC m=+1.123771569" watchObservedRunningTime="2025-07-07 06:04:25.265045601 +0000 UTC m=+1.132898704" Jul 7 06:04:26.224808 kubelet[2634]: E0707 06:04:26.224778 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:26.225518 kubelet[2634]: E0707 06:04:26.224837 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:27.228736 kubelet[2634]: E0707 06:04:27.228689 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:29.480853 kubelet[2634]: I0707 06:04:29.480818 2634 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:04:29.481740 containerd[1541]: time="2025-07-07T06:04:29.481437496Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:04:29.482061 kubelet[2634]: I0707 06:04:29.481756 2634 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:04:29.856542 kubelet[2634]: E0707 06:04:29.856450 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:30.195013 kubelet[2634]: E0707 06:04:30.194883 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:30.233570 kubelet[2634]: E0707 06:04:30.233537 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:30.444143 kubelet[2634]: I0707 06:04:30.444101 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed57c627-e795-40bd-9ecf-79d27de778e3-lib-modules\") pod \"kube-proxy-6zssh\" (UID: \"ed57c627-e795-40bd-9ecf-79d27de778e3\") " pod="kube-system/kube-proxy-6zssh" Jul 7 06:04:30.444143 kubelet[2634]: I0707 06:04:30.444148 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed57c627-e795-40bd-9ecf-79d27de778e3-kube-proxy\") pod \"kube-proxy-6zssh\" (UID: \"ed57c627-e795-40bd-9ecf-79d27de778e3\") " pod="kube-system/kube-proxy-6zssh" Jul 7 06:04:30.444303 kubelet[2634]: I0707 06:04:30.444174 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed57c627-e795-40bd-9ecf-79d27de778e3-xtables-lock\") pod \"kube-proxy-6zssh\" (UID: \"ed57c627-e795-40bd-9ecf-79d27de778e3\") " pod="kube-system/kube-proxy-6zssh" Jul 7 06:04:30.444303 kubelet[2634]: I0707 06:04:30.444191 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dbjd\" (UniqueName: \"kubernetes.io/projected/ed57c627-e795-40bd-9ecf-79d27de778e3-kube-api-access-6dbjd\") pod \"kube-proxy-6zssh\" (UID: \"ed57c627-e795-40bd-9ecf-79d27de778e3\") " pod="kube-system/kube-proxy-6zssh" Jul 7 06:04:30.545334 kubelet[2634]: I0707 06:04:30.545272 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0473a6d-5c91-40bb-8d27-d4ab16b76119-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-cn7l2\" (UID: \"d0473a6d-5c91-40bb-8d27-d4ab16b76119\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cn7l2" Jul 7 06:04:30.545698 kubelet[2634]: I0707 06:04:30.545431 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sqql\" (UniqueName: \"kubernetes.io/projected/d0473a6d-5c91-40bb-8d27-d4ab16b76119-kube-api-access-4sqql\") pod \"tigera-operator-5bf8dfcb4-cn7l2\" (UID: \"d0473a6d-5c91-40bb-8d27-d4ab16b76119\") " 
pod="tigera-operator/tigera-operator-5bf8dfcb4-cn7l2" Jul 7 06:04:30.703437 kubelet[2634]: E0707 06:04:30.703408 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:30.703963 containerd[1541]: time="2025-07-07T06:04:30.703927327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6zssh,Uid:ed57c627-e795-40bd-9ecf-79d27de778e3,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:30.721738 containerd[1541]: time="2025-07-07T06:04:30.721637260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:30.721738 containerd[1541]: time="2025-07-07T06:04:30.721680756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:04:30.721738 containerd[1541]: time="2025-07-07T06:04:30.721692160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:30.721901 containerd[1541]: time="2025-07-07T06:04:30.721774549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:30.749054 containerd[1541]: time="2025-07-07T06:04:30.749015328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6zssh,Uid:ed57c627-e795-40bd-9ecf-79d27de778e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ab8e0cdf1b3498b710092e260d726c316bf9ba8935adcec70399f0ec2c01b1e\"" Jul 7 06:04:30.749804 kubelet[2634]: E0707 06:04:30.749587 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:30.751456 containerd[1541]: time="2025-07-07T06:04:30.751410614Z" level=info msg="CreateContainer within sandbox \"6ab8e0cdf1b3498b710092e260d726c316bf9ba8935adcec70399f0ec2c01b1e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:04:30.762489 containerd[1541]: time="2025-07-07T06:04:30.762449232Z" level=info msg="CreateContainer within sandbox \"6ab8e0cdf1b3498b710092e260d726c316bf9ba8935adcec70399f0ec2c01b1e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5a3580f58d06fda8634bbce30524b8af430b84f512df2c3bf7a4d1b8850863c9\"" Jul 7 06:04:30.762960 containerd[1541]: time="2025-07-07T06:04:30.762929001Z" level=info msg="StartContainer for \"5a3580f58d06fda8634bbce30524b8af430b84f512df2c3bf7a4d1b8850863c9\"" Jul 7 06:04:30.808250 containerd[1541]: time="2025-07-07T06:04:30.808126801Z" level=info msg="StartContainer for \"5a3580f58d06fda8634bbce30524b8af430b84f512df2c3bf7a4d1b8850863c9\" returns successfully" Jul 7 06:04:30.820841 containerd[1541]: time="2025-07-07T06:04:30.820765704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cn7l2,Uid:d0473a6d-5c91-40bb-8d27-d4ab16b76119,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:04:30.844701 containerd[1541]: time="2025-07-07T06:04:30.844570590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:30.844701 containerd[1541]: time="2025-07-07T06:04:30.844630891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:04:30.844701 containerd[1541]: time="2025-07-07T06:04:30.844642095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:30.844828 containerd[1541]: time="2025-07-07T06:04:30.844771101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:30.892109 containerd[1541]: time="2025-07-07T06:04:30.892072083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cn7l2,Uid:d0473a6d-5c91-40bb-8d27-d4ab16b76119,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"927172779982f5a8ab4c048de37a9d6fe3e4874f7c56436c3e9474ce92029310\"" Jul 7 06:04:30.894354 containerd[1541]: time="2025-07-07T06:04:30.894330321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:04:31.236821 kubelet[2634]: E0707 06:04:31.235927 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:31.245268 kubelet[2634]: I0707 06:04:31.245187 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6zssh" podStartSLOduration=1.245173729 podStartE2EDuration="1.245173729s" podCreationTimestamp="2025-07-07 06:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:04:31.24464039 +0000 UTC m=+7.112493493" watchObservedRunningTime="2025-07-07 06:04:31.245173729 +0000 UTC m=+7.113026872" Jul 7 06:04:32.228588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733862816.mount: Deactivated successfully. 
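
[Annotation: the pod_startup_latency_tracker entry above reports podStartE2EDuration="1.245173729s" for kube-proxy-6zssh, which is approximately the gap between podCreationTimestamp (06:04:30) and observedRunningTime (06:04:31.2446). A stdlib sketch that recomputes this from the timestamp format klog prints; an approximation for reading logs, not the tracker's own bookkeeping, which records its sample a moment later:]

    // latency.go - sketch: recompute a pod-startup duration from the two
    // timestamps printed by pod_startup_latency_tracker above.
    package main

    import (
        "fmt"
        "log"
        "strings"
        "time"
    )

    // time.Parse also accepts fractional seconds absent from the layout.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func parseKlogTime(s string) time.Time {
        // Drop the monotonic-clock suffix ("m=+7.112...") that klog appends.
        s = strings.Split(s, " m=")[0]
        t, err := time.Parse(layout, s)
        if err != nil {
            log.Fatal(err)
        }
        return t
    }

    func main() {
        created := parseKlogTime("2025-07-07 06:04:30 +0000 UTC")
        running := parseKlogTime("2025-07-07 06:04:31.24464039 +0000 UTC m=+7.112493493")
        // Prints ~1.2446s, close to the 1.245173729s the kubelet reported.
        fmt.Println("approx podStartE2EDuration:", running.Sub(created))
    }
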
Jul 7 06:04:32.682807 containerd[1541]: time="2025-07-07T06:04:32.682748699Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:32.683318 containerd[1541]: time="2025-07-07T06:04:32.683275586Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 06:04:32.684278 containerd[1541]: time="2025-07-07T06:04:32.684241531Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:32.686531 containerd[1541]: time="2025-07-07T06:04:32.686484680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:32.687386 containerd[1541]: time="2025-07-07T06:04:32.687361557Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.792892628s" Jul 7 06:04:32.687436 containerd[1541]: time="2025-07-07T06:04:32.687391766Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 06:04:32.689523 containerd[1541]: time="2025-07-07T06:04:32.689420127Z" level=info msg="CreateContainer within sandbox \"927172779982f5a8ab4c048de37a9d6fe3e4874f7c56436c3e9474ce92029310\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:04:32.699823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2412828812.mount: Deactivated successfully. 
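
[Annotation: the pull record above names both a repo tag (quay.io/tigera/operator:v1.38.3) and a repo digest (…@sha256:dbf1bad0…); the digest is the immutable handle for the exact bytes containerd fetched, while the tag can move between releases. A simplified sketch splitting an image reference into name, tag, and digest; it assumes no host:port registry component, which real parsers such as containerd's reference package do handle:]

    // imageref.go - sketch: split the image references seen above into
    // name/tag/digest. Simplified: assumes no host:port in the registry part.
    package main

    import (
        "fmt"
        "strings"
    )

    func split(ref string) (name, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            return ref[:i], "", ref[i+1:]
        }
        if i := strings.LastIndex(ref, ":"); i >= 0 {
            return ref[:i], ref[i+1:], ""
        }
        return ref, "", ""
    }

    func main() {
        for _, ref := range []string{
            "quay.io/tigera/operator:v1.38.3",
            "quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121",
        } {
            n, t, d := split(ref)
            fmt.Printf("name=%s tag=%s digest=%s\n", n, t, d)
        }
    }
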
Jul 7 06:04:32.703723 containerd[1541]: time="2025-07-07T06:04:32.703686475Z" level=info msg="CreateContainer within sandbox \"927172779982f5a8ab4c048de37a9d6fe3e4874f7c56436c3e9474ce92029310\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e10e731c5bcdfe16897036c31f49cefbe3926c1c96432393e92441f7f0125867\"" Jul 7 06:04:32.704198 containerd[1541]: time="2025-07-07T06:04:32.704172829Z" level=info msg="StartContainer for \"e10e731c5bcdfe16897036c31f49cefbe3926c1c96432393e92441f7f0125867\"" Jul 7 06:04:32.746472 containerd[1541]: time="2025-07-07T06:04:32.746437705Z" level=info msg="StartContainer for \"e10e731c5bcdfe16897036c31f49cefbe3926c1c96432393e92441f7f0125867\" returns successfully" Jul 7 06:04:35.862908 kubelet[2634]: E0707 06:04:35.862303 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:35.881469 kubelet[2634]: I0707 06:04:35.881368 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-cn7l2" podStartSLOduration=4.086398197 podStartE2EDuration="5.881347763s" podCreationTimestamp="2025-07-07 06:04:30 +0000 UTC" firstStartedPulling="2025-07-07 06:04:30.893152505 +0000 UTC m=+6.761005568" lastFinishedPulling="2025-07-07 06:04:32.688102031 +0000 UTC m=+8.555955134" observedRunningTime="2025-07-07 06:04:33.25005102 +0000 UTC m=+9.117904123" watchObservedRunningTime="2025-07-07 06:04:35.881347763 +0000 UTC m=+11.749200826" Jul 7 06:04:38.111181 sudo[1747]: pam_unix(sudo:session): session closed for user root Jul 7 06:04:38.120137 sshd[1740]: pam_unix(sshd:session): session closed for user core Jul 7 06:04:38.125278 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:44194.service: Deactivated successfully. Jul 7 06:04:38.125348 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:04:38.127497 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:04:38.128954 systemd-logind[1522]: Removed session 7. Jul 7 06:04:39.870244 kubelet[2634]: E0707 06:04:39.870203 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:40.112106 update_engine[1530]: I20250707 06:04:40.111915 1530 update_attempter.cc:509] Updating boot flags... 
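
[Annotation: the recurring dns.go:153 "Nameserver limits exceeded" warnings throughout this log mean the node's resolv.conf lists more nameservers than the three kubelet will propagate into pod DNS config (matching the classic glibc resolver cap), so it keeps the first three — here 1.1.1.1, 1.0.0.1, 8.8.8.8 — and warns on every sync. A stdlib sketch reproducing the check against /etc/resolv.conf:]

    // dnscheck.go - sketch: reproduce the nameserver-limit warning by counting
    // nameserver lines in resolv.conf (the propagated limit is 3).
    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: keeping %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }
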
Jul 7 06:04:40.138358 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3040) Jul 7 06:04:40.188960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3042) Jul 7 06:04:42.936070 kubelet[2634]: I0707 06:04:42.935946 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60cd87b2-efcb-4382-98d7-7c7b553d91ee-tigera-ca-bundle\") pod \"calico-typha-7579c85bff-274w8\" (UID: \"60cd87b2-efcb-4382-98d7-7c7b553d91ee\") " pod="calico-system/calico-typha-7579c85bff-274w8" Jul 7 06:04:42.936070 kubelet[2634]: I0707 06:04:42.935992 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snnvl\" (UniqueName: \"kubernetes.io/projected/60cd87b2-efcb-4382-98d7-7c7b553d91ee-kube-api-access-snnvl\") pod \"calico-typha-7579c85bff-274w8\" (UID: \"60cd87b2-efcb-4382-98d7-7c7b553d91ee\") " pod="calico-system/calico-typha-7579c85bff-274w8" Jul 7 06:04:42.936070 kubelet[2634]: I0707 06:04:42.936013 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/60cd87b2-efcb-4382-98d7-7c7b553d91ee-typha-certs\") pod \"calico-typha-7579c85bff-274w8\" (UID: \"60cd87b2-efcb-4382-98d7-7c7b553d91ee\") " pod="calico-system/calico-typha-7579c85bff-274w8" Jul 7 06:04:43.142372 kubelet[2634]: I0707 06:04:43.142306 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4fcacf5e-5474-4089-b807-59c7cfee7497-node-certs\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142372 kubelet[2634]: I0707 06:04:43.142363 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-xtables-lock\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142372 kubelet[2634]: I0707 06:04:43.142382 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-cni-bin-dir\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142601 kubelet[2634]: I0707 06:04:43.142398 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-lib-modules\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142601 kubelet[2634]: I0707 06:04:43.142415 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-var-run-calico\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142601 kubelet[2634]: I0707 06:04:43.142432 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fcacf5e-5474-4089-b807-59c7cfee7497-tigera-ca-bundle\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142601 kubelet[2634]: I0707 06:04:43.142448 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-cni-net-dir\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142601 kubelet[2634]: I0707 06:04:43.142464 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t5nz\" (UniqueName: \"kubernetes.io/projected/4fcacf5e-5474-4089-b807-59c7cfee7497-kube-api-access-9t5nz\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142718 kubelet[2634]: I0707 06:04:43.142481 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-policysync\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142718 kubelet[2634]: I0707 06:04:43.142498 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-cni-log-dir\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142718 kubelet[2634]: I0707 06:04:43.142514 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-flexvol-driver-host\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.142718 kubelet[2634]: I0707 06:04:43.142529 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4fcacf5e-5474-4089-b807-59c7cfee7497-var-lib-calico\") pod \"calico-node-jsbh2\" (UID: \"4fcacf5e-5474-4089-b807-59c7cfee7497\") " pod="calico-system/calico-node-jsbh2" Jul 7 06:04:43.203979 kubelet[2634]: E0707 06:04:43.203299 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:43.204214 containerd[1541]: time="2025-07-07T06:04:43.204024067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7579c85bff-274w8,Uid:60cd87b2-efcb-4382-98d7-7c7b553d91ee,Namespace:calico-system,Attempt:0,}" Jul 7 06:04:43.227258 containerd[1541]: time="2025-07-07T06:04:43.227162450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:43.227377 containerd[1541]: time="2025-07-07T06:04:43.227268509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:04:43.227377 containerd[1541]: time="2025-07-07T06:04:43.227295954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:04:43.227497 containerd[1541]: time="2025-07-07T06:04:43.227463024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:04:43.248614 kubelet[2634]: E0707 06:04:43.248574 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:04:43.248614 kubelet[2634]: W0707 06:04:43.248598 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:04:43.250490 kubelet[2634]: E0707 06:04:43.250459 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:04:43.295289 kubelet[2634]: E0707 06:04:43.295020 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4crgb" podUID="9daedc13-d72f-4853-892c-86b97bad3b56"
Jul 7 06:04:43.299827 containerd[1541]: time="2025-07-07T06:04:43.299738684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7579c85bff-274w8,Uid:60cd87b2-efcb-4382-98d7-7c7b553d91ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"305accdf51315edd6bb63f00423384ae3d9132e85b2de20263641473c86c985d\""
Jul 7 06:04:43.301089 kubelet[2634]: E0707 06:04:43.301055 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:04:43.303447 containerd[1541]: time="2025-07-07T06:04:43.303385137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
[The three-line FlexVolume probe failure above (driver-call.go:262, driver-call.go:149, plugins.go:691) recurs dozens of times between 06:04:43.248 and 06:04:45.370 as kubelet re-probes the missing nodeagent~uds driver; the duplicate triplets are omitted below.]
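These probe failures are kubelet's FlexVolume prober failing in two stages: the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist ("executable file not found in $PATH"), so the "init" call produces empty output, which then fails JSON unmarshalling ("unexpected end of JSON input"). For context, here is a minimal sketch of what a driver has to do to satisfy that probe, assuming the standard FlexVolume calling convention: the binary path and the [init] argument are taken from the log, while the JSON status shape is the documented FlexVolume driver contract, not something this log itself confirms.

```go
// Minimal FlexVolume driver sketch: kubelet executes the driver binary
// with a command as argv[1] and expects a JSON status object on stdout.
// The empty output seen in the log is the direct result of the binary
// being absent, not of a malformed response.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the JSON shape FlexVolume drivers return to kubelet.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) < 2 {
		out.Encode(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// "attach": false tells kubelet this driver has no attach/detach phase.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// mount, unmount, etc. would go here; the probe itself only calls init.
		out.Encode(driverStatus{Status: "Not supported"})
	}
}
```

Any executable answering "init" this way at the path in the log would silence the probe errors; the real fix on this node is presumably installing the actual nodeagent~uds driver.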
Jul 7 06:04:43.348030 kubelet[2634]: I0707 06:04:43.347989 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9daedc13-d72f-4853-892c-86b97bad3b56-socket-dir\") pod \"csi-node-driver-4crgb\" (UID: \"9daedc13-d72f-4853-892c-86b97bad3b56\") " pod="calico-system/csi-node-driver-4crgb"
Jul 7 06:04:43.348234 kubelet[2634]: I0707 06:04:43.348199 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9daedc13-d72f-4853-892c-86b97bad3b56-kubelet-dir\") pod \"csi-node-driver-4crgb\" (UID: \"9daedc13-d72f-4853-892c-86b97bad3b56\") " pod="calico-system/csi-node-driver-4crgb"
Jul 7 06:04:43.348436 kubelet[2634]: I0707 06:04:43.348401 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vf5\" (UniqueName: \"kubernetes.io/projected/9daedc13-d72f-4853-892c-86b97bad3b56-kube-api-access-f8vf5\") pod \"csi-node-driver-4crgb\" (UID: \"9daedc13-d72f-4853-892c-86b97bad3b56\") " pod="calico-system/csi-node-driver-4crgb"
Jul 7 06:04:43.348619 kubelet[2634]: I0707 06:04:43.348591 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9daedc13-d72f-4853-892c-86b97bad3b56-registration-dir\") pod \"csi-node-driver-4crgb\" (UID: \"9daedc13-d72f-4853-892c-86b97bad3b56\") " pod="calico-system/csi-node-driver-4crgb"
Jul 7 06:04:43.348814 kubelet[2634]: I0707 06:04:43.348778 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9daedc13-d72f-4853-892c-86b97bad3b56-varrun\") pod \"csi-node-driver-4crgb\" (UID: \"9daedc13-d72f-4853-892c-86b97bad3b56\") " pod="calico-system/csi-node-driver-4crgb"
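These reconciler lines enumerate the volumes kubelet is attaching for the csi-node-driver pod: four host-path volumes (socket-dir, kubelet-dir, registration-dir, varrun) and one projected service-account token (kube-api-access-f8vf5), as the kubernetes.io/host-path and kubernetes.io/projected UniqueName prefixes indicate. Below is a sketch of the equivalent volume list in the Kubernetes Go types; the volume names and source kinds come from the log, but every host path is a hypothetical placeholder (the log does not record paths), and the token volume is normally injected by the API server rather than authored by hand.

```go
// Sketch of the csi-node-driver volume set implied by the reconciler log
// lines. Names and source types are from the log; all paths below are
// hypothetical placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPath builds a host-path volume, matching the kubernetes.io/host-path
// UniqueName prefix seen in the log.
func hostPath(name, path string) corev1.Volume {
	return corev1.Volume{
		Name:         name,
		VolumeSource: corev1.VolumeSource{HostPath: &corev1.HostPathVolumeSource{Path: path}},
	}
}

func main() {
	volumes := []corev1.Volume{
		hostPath("socket-dir", "/var/lib/kubelet/plugins/example-csi"),    // hypothetical path
		hostPath("kubelet-dir", "/var/lib/kubelet"),                       // hypothetical path
		hostPath("registration-dir", "/var/lib/kubelet/plugins_registry"), // hypothetical path
		hostPath("varrun", "/var/run"),                                    // hypothetical path
		{
			// kube-api-access-f8vf5: the projected service-account token,
			// auto-injected into the pod spec by the API server.
			Name: "kube-api-access-f8vf5",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
	}
	fmt.Println(len(volumes), "volumes")
}
```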
Jul 7 06:04:43.402573 containerd[1541]: time="2025-07-07T06:04:43.402532608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jsbh2,Uid:4fcacf5e-5474-4089-b807-59c7cfee7497,Namespace:calico-system,Attempt:0,}"
Jul 7 06:04:43.428554 containerd[1541]: time="2025-07-07T06:04:43.428150194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 06:04:43.428554 containerd[1541]: time="2025-07-07T06:04:43.428531063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 06:04:43.428554 containerd[1541]: time="2025-07-07T06:04:43.428543385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:04:43.428724 containerd[1541]: time="2025-07-07T06:04:43.428621039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 06:04:43.468636 containerd[1541]: time="2025-07-07T06:04:43.468605878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jsbh2,Uid:4fcacf5e-5474-4089-b807-59c7cfee7497,Namespace:calico-system,Attempt:0,} returns sandbox id \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\""
Jul 7 06:04:44.219324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount997822792.mount: Deactivated successfully.
Jul 7 06:04:44.633990 containerd[1541]: time="2025-07-07T06:04:44.633947316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:44.634462 containerd[1541]: time="2025-07-07T06:04:44.634427238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 7 06:04:44.635273 containerd[1541]: time="2025-07-07T06:04:44.635237417Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:44.637324 containerd[1541]: time="2025-07-07T06:04:44.637296728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:04:44.637981 containerd[1541]: time="2025-07-07T06:04:44.637952800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.334525896s"
Jul 7 06:04:44.638026 containerd[1541]: time="2025-07-07T06:04:44.637983406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 7 06:04:44.639084 containerd[1541]: time="2025-07-07T06:04:44.639056389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 06:04:44.656835 containerd[1541]: time="2025-07-07T06:04:44.656795298Z" level=info msg="CreateContainer within sandbox \"305accdf51315edd6bb63f00423384ae3d9132e85b2de20263641473c86c985d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 06:04:44.675807 containerd[1541]: time="2025-07-07T06:04:44.675748374Z" level=info msg="CreateContainer within sandbox \"305accdf51315edd6bb63f00423384ae3d9132e85b2de20263641473c86c985d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7ca835771989b7e8daa020d872680e0cf9b7e1c12a3edf99b6989be5507d673d\""
Jul 7 06:04:44.676332 containerd[1541]: time="2025-07-07T06:04:44.676296188Z" level=info msg="StartContainer for \"7ca835771989b7e8daa020d872680e0cf9b7e1c12a3edf99b6989be5507d673d\""
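The typha pull above fetches about 33 MB in roughly 1.33 s and records both the repo tag and the repo digest before the sandbox's container is created and started. The same fetch can be reproduced against the node's containerd directly; a minimal sketch with containerd's Go client, assuming the default socket path and the k8s.io namespace that the CRI plugin keeps its images in (both conventional defaults, not stated in this log):

```go
// Sketch: re-pulling the image from the log with containerd's Go client.
// Assumes the default containerd socket and the "k8s.io" CRI namespace;
// error handling is trimmed to panics for brevity.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack also unpacks the layers into a snapshot, matching what
	// the kubelet-initiated pull in the log does before CreateContainer.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println(img.Name(), img.Target().Digest)
}
```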
\"7ca835771989b7e8daa020d872680e0cf9b7e1c12a3edf99b6989be5507d673d\" returns successfully" Jul 7 06:04:45.213929 kubelet[2634]: E0707 06:04:45.213818 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4crgb" podUID="9daedc13-d72f-4853-892c-86b97bad3b56" Jul 7 06:04:45.283119 kubelet[2634]: E0707 06:04:45.283070 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:45.365174 kubelet[2634]: E0707 06:04:45.365124 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.365174 kubelet[2634]: W0707 06:04:45.365150 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.365174 kubelet[2634]: E0707 06:04:45.365170 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.365407 kubelet[2634]: E0707 06:04:45.365378 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.365407 kubelet[2634]: W0707 06:04:45.365392 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.365407 kubelet[2634]: E0707 06:04:45.365401 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.365659 kubelet[2634]: E0707 06:04:45.365617 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.365659 kubelet[2634]: W0707 06:04:45.365630 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.365659 kubelet[2634]: E0707 06:04:45.365642 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.365829 kubelet[2634]: E0707 06:04:45.365802 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.365829 kubelet[2634]: W0707 06:04:45.365814 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.365829 kubelet[2634]: E0707 06:04:45.365823 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jul 7 06:04:45.370312 kubelet[2634]: E0707 06:04:45.370302 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:04:45.370339 kubelet[2634]: W0707 06:04:45.370312 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:04:45.370339 kubelet[2634]: E0707 06:04:45.370332 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:04:45.370452 kubelet[2634]: E0707 06:04:45.370442 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.370476 kubelet[2634]: W0707 06:04:45.370451 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.370476 kubelet[2634]: E0707 06:04:45.370463 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.370611 kubelet[2634]: E0707 06:04:45.370601 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.370635 kubelet[2634]: W0707 06:04:45.370611 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.370635 kubelet[2634]: E0707 06:04:45.370619 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.370757 kubelet[2634]: E0707 06:04:45.370747 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.370780 kubelet[2634]: W0707 06:04:45.370757 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.370780 kubelet[2634]: E0707 06:04:45.370769 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.370960 kubelet[2634]: E0707 06:04:45.370948 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.370991 kubelet[2634]: W0707 06:04:45.370961 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.370991 kubelet[2634]: E0707 06:04:45.370973 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.371195 kubelet[2634]: E0707 06:04:45.371180 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.371224 kubelet[2634]: W0707 06:04:45.371196 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.371224 kubelet[2634]: E0707 06:04:45.371212 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:04:45.371401 kubelet[2634]: E0707 06:04:45.371374 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.371401 kubelet[2634]: W0707 06:04:45.371383 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.371401 kubelet[2634]: E0707 06:04:45.371396 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.371609 kubelet[2634]: E0707 06:04:45.371558 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.371609 kubelet[2634]: W0707 06:04:45.371568 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.371609 kubelet[2634]: E0707 06:04:45.371581 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.371819 kubelet[2634]: E0707 06:04:45.371793 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.371819 kubelet[2634]: W0707 06:04:45.371809 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.371899 kubelet[2634]: E0707 06:04:45.371824 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:04:45.372049 kubelet[2634]: E0707 06:04:45.372022 2634 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:04:45.372049 kubelet[2634]: W0707 06:04:45.372034 2634 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:04:45.372049 kubelet[2634]: E0707 06:04:45.372042 2634 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:04:45.573101 containerd[1541]: time="2025-07-07T06:04:45.573050273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:45.573849 containerd[1541]: time="2025-07-07T06:04:45.573814198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 06:04:45.575486 containerd[1541]: time="2025-07-07T06:04:45.575450305Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:45.579496 containerd[1541]: time="2025-07-07T06:04:45.579411230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:45.591934 containerd[1541]: time="2025-07-07T06:04:45.591875382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 952.769264ms" Jul 7 06:04:45.592124 containerd[1541]: time="2025-07-07T06:04:45.592030487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 06:04:45.594787 containerd[1541]: time="2025-07-07T06:04:45.594706243Z" level=info msg="CreateContainer within sandbox \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:04:45.608055 containerd[1541]: time="2025-07-07T06:04:45.608003651Z" level=info msg="CreateContainer within sandbox \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df7346392ecfe13282b91ff99f49435a2a3679aa2fd88f9f6e586331a44058e2\"" Jul 7 06:04:45.609026 containerd[1541]: time="2025-07-07T06:04:45.608996292Z" level=info msg="StartContainer for \"df7346392ecfe13282b91ff99f49435a2a3679aa2fd88f9f6e586331a44058e2\"" Jul 7 06:04:45.660605 containerd[1541]: time="2025-07-07T06:04:45.660554176Z" level=info msg="StartContainer for \"df7346392ecfe13282b91ff99f49435a2a3679aa2fd88f9f6e586331a44058e2\" returns successfully" Jul 7 06:04:45.724844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df7346392ecfe13282b91ff99f49435a2a3679aa2fd88f9f6e586331a44058e2-rootfs.mount: Deactivated successfully. 
Jul 7 06:04:45.750802 containerd[1541]: time="2025-07-07T06:04:45.750734715Z" level=info msg="shim disconnected" id=df7346392ecfe13282b91ff99f49435a2a3679aa2fd88f9f6e586331a44058e2 namespace=k8s.io Jul 7 06:04:45.750802 containerd[1541]: time="2025-07-07T06:04:45.750797525Z" level=warning msg="cleaning up after shim disconnected" id=df7346392ecfe13282b91ff99f49435a2a3679aa2fd88f9f6e586331a44058e2 namespace=k8s.io Jul 7 06:04:45.750802 containerd[1541]: time="2025-07-07T06:04:45.750809247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:04:46.288337 kubelet[2634]: I0707 06:04:46.288300 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:04:46.289063 kubelet[2634]: E0707 06:04:46.289028 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:46.290263 containerd[1541]: time="2025-07-07T06:04:46.290062958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:04:46.305674 kubelet[2634]: I0707 06:04:46.305611 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7579c85bff-274w8" podStartSLOduration=2.969627588 podStartE2EDuration="4.305597497s" podCreationTimestamp="2025-07-07 06:04:42 +0000 UTC" firstStartedPulling="2025-07-07 06:04:43.302672049 +0000 UTC m=+19.170525152" lastFinishedPulling="2025-07-07 06:04:44.638641958 +0000 UTC m=+20.506495061" observedRunningTime="2025-07-07 06:04:45.293822121 +0000 UTC m=+21.161675224" watchObservedRunningTime="2025-07-07 06:04:46.305597497 +0000 UTC m=+22.173450600" Jul 7 06:04:47.214133 kubelet[2634]: E0707 06:04:47.214069 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4crgb" podUID="9daedc13-d72f-4853-892c-86b97bad3b56" Jul 7 06:04:48.587883 containerd[1541]: time="2025-07-07T06:04:48.587828419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:48.588286 containerd[1541]: time="2025-07-07T06:04:48.588226876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 06:04:48.589225 containerd[1541]: time="2025-07-07T06:04:48.589192694Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:48.591320 containerd[1541]: time="2025-07-07T06:04:48.591242666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:48.592406 containerd[1541]: time="2025-07-07T06:04:48.592373307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.302269583s" Jul 7 06:04:48.592469 containerd[1541]: time="2025-07-07T06:04:48.592406992Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 06:04:48.604042 containerd[1541]: time="2025-07-07T06:04:48.603990082Z" level=info msg="CreateContainer within sandbox \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:04:48.616055 containerd[1541]: time="2025-07-07T06:04:48.615965269Z" level=info msg="CreateContainer within sandbox \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"321108c7dc1da37227f5b1c8caec3cf62bdb602c0376a0747a999522f004b508\"" Jul 7 06:04:48.616764 containerd[1541]: time="2025-07-07T06:04:48.616484903Z" level=info msg="StartContainer for \"321108c7dc1da37227f5b1c8caec3cf62bdb602c0376a0747a999522f004b508\"" Jul 7 06:04:48.670370 containerd[1541]: time="2025-07-07T06:04:48.670012650Z" level=info msg="StartContainer for \"321108c7dc1da37227f5b1c8caec3cf62bdb602c0376a0747a999522f004b508\" returns successfully" Jul 7 06:04:49.213983 kubelet[2634]: E0707 06:04:49.213910 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4crgb" podUID="9daedc13-d72f-4853-892c-86b97bad3b56" Jul 7 06:04:49.380771 kubelet[2634]: I0707 06:04:49.380742 2634 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 06:04:49.395163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-321108c7dc1da37227f5b1c8caec3cf62bdb602c0376a0747a999522f004b508-rootfs.mount: Deactivated successfully. 
Jul 7 06:04:49.402183 containerd[1541]: time="2025-07-07T06:04:49.402057437Z" level=info msg="shim disconnected" id=321108c7dc1da37227f5b1c8caec3cf62bdb602c0376a0747a999522f004b508 namespace=k8s.io Jul 7 06:04:49.402183 containerd[1541]: time="2025-07-07T06:04:49.402181694Z" level=warning msg="cleaning up after shim disconnected" id=321108c7dc1da37227f5b1c8caec3cf62bdb602c0376a0747a999522f004b508 namespace=k8s.io Jul 7 06:04:49.402344 containerd[1541]: time="2025-07-07T06:04:49.402191655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:04:49.598302 kubelet[2634]: I0707 06:04:49.598138 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bc4bd25-cee7-4295-98da-f48b4661ae1c-calico-apiserver-certs\") pod \"calico-apiserver-55f9545c55-rznvr\" (UID: \"9bc4bd25-cee7-4295-98da-f48b4661ae1c\") " pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" Jul 7 06:04:49.598302 kubelet[2634]: I0707 06:04:49.598191 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58b2k\" (UniqueName: \"kubernetes.io/projected/3711a9de-1f07-4eac-8a8a-7a0a57ec740f-kube-api-access-58b2k\") pod \"goldmane-58fd7646b9-qv4x5\" (UID: \"3711a9de-1f07-4eac-8a8a-7a0a57ec740f\") " pod="calico-system/goldmane-58fd7646b9-qv4x5" Jul 7 06:04:49.598302 kubelet[2634]: I0707 06:04:49.598214 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zccx\" (UniqueName: \"kubernetes.io/projected/7bb09144-47e0-49e9-802e-bdaaa9d250da-kube-api-access-5zccx\") pod \"coredns-7c65d6cfc9-pr8nc\" (UID: \"7bb09144-47e0-49e9-802e-bdaaa9d250da\") " pod="kube-system/coredns-7c65d6cfc9-pr8nc" Jul 7 06:04:49.598302 kubelet[2634]: I0707 06:04:49.598236 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk6lk\" (UniqueName: \"kubernetes.io/projected/9bc4bd25-cee7-4295-98da-f48b4661ae1c-kube-api-access-jk6lk\") pod \"calico-apiserver-55f9545c55-rznvr\" (UID: \"9bc4bd25-cee7-4295-98da-f48b4661ae1c\") " pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" Jul 7 06:04:49.598302 kubelet[2634]: I0707 06:04:49.598255 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c1202884-ce77-4d20-a14f-e7b1c46573d9-config-volume\") pod \"coredns-7c65d6cfc9-dxs49\" (UID: \"c1202884-ce77-4d20-a14f-e7b1c46573d9\") " pod="kube-system/coredns-7c65d6cfc9-dxs49" Jul 7 06:04:49.598973 kubelet[2634]: I0707 06:04:49.598272 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxlhf\" (UniqueName: \"kubernetes.io/projected/c1202884-ce77-4d20-a14f-e7b1c46573d9-kube-api-access-xxlhf\") pod \"coredns-7c65d6cfc9-dxs49\" (UID: \"c1202884-ce77-4d20-a14f-e7b1c46573d9\") " pod="kube-system/coredns-7c65d6cfc9-dxs49" Jul 7 06:04:49.598973 kubelet[2634]: I0707 06:04:49.598300 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrc54\" (UniqueName: \"kubernetes.io/projected/b8274d25-fd41-455e-9bdd-7b60485db03c-kube-api-access-jrc54\") pod \"whisker-855d6c9f76-dt7d4\" (UID: \"b8274d25-fd41-455e-9bdd-7b60485db03c\") " pod="calico-system/whisker-855d6c9f76-dt7d4" Jul 7 06:04:49.598973 kubelet[2634]: I0707 06:04:49.598326 2634 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7bb09144-47e0-49e9-802e-bdaaa9d250da-config-volume\") pod \"coredns-7c65d6cfc9-pr8nc\" (UID: \"7bb09144-47e0-49e9-802e-bdaaa9d250da\") " pod="kube-system/coredns-7c65d6cfc9-pr8nc" Jul 7 06:04:49.598973 kubelet[2634]: I0707 06:04:49.598342 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lwn\" (UniqueName: \"kubernetes.io/projected/f43cdfb4-d450-43db-b66c-b3c84cc5b1e9-kube-api-access-m7lwn\") pod \"calico-apiserver-55f9545c55-sb8ps\" (UID: \"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9\") " pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" Jul 7 06:04:49.598973 kubelet[2634]: I0707 06:04:49.598359 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-ca-bundle\") pod \"whisker-855d6c9f76-dt7d4\" (UID: \"b8274d25-fd41-455e-9bdd-7b60485db03c\") " pod="calico-system/whisker-855d6c9f76-dt7d4" Jul 7 06:04:49.599138 kubelet[2634]: I0707 06:04:49.598375 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f43cdfb4-d450-43db-b66c-b3c84cc5b1e9-calico-apiserver-certs\") pod \"calico-apiserver-55f9545c55-sb8ps\" (UID: \"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9\") " pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" Jul 7 06:04:49.599138 kubelet[2634]: I0707 06:04:49.598394 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlxr\" (UniqueName: \"kubernetes.io/projected/35fcaab8-d9c9-4023-a99b-c15aee365e80-kube-api-access-ddlxr\") pod \"calico-kube-controllers-56d994cf89-m95g4\" (UID: \"35fcaab8-d9c9-4023-a99b-c15aee365e80\") " pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" Jul 7 06:04:49.599138 kubelet[2634]: I0707 06:04:49.598415 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3711a9de-1f07-4eac-8a8a-7a0a57ec740f-config\") pod \"goldmane-58fd7646b9-qv4x5\" (UID: \"3711a9de-1f07-4eac-8a8a-7a0a57ec740f\") " pod="calico-system/goldmane-58fd7646b9-qv4x5" Jul 7 06:04:49.599138 kubelet[2634]: I0707 06:04:49.598433 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3711a9de-1f07-4eac-8a8a-7a0a57ec740f-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-qv4x5\" (UID: \"3711a9de-1f07-4eac-8a8a-7a0a57ec740f\") " pod="calico-system/goldmane-58fd7646b9-qv4x5" Jul 7 06:04:49.599138 kubelet[2634]: I0707 06:04:49.598450 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3711a9de-1f07-4eac-8a8a-7a0a57ec740f-goldmane-key-pair\") pod \"goldmane-58fd7646b9-qv4x5\" (UID: \"3711a9de-1f07-4eac-8a8a-7a0a57ec740f\") " pod="calico-system/goldmane-58fd7646b9-qv4x5" Jul 7 06:04:49.599248 kubelet[2634]: I0707 06:04:49.598611 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35fcaab8-d9c9-4023-a99b-c15aee365e80-tigera-ca-bundle\") pod 
\"calico-kube-controllers-56d994cf89-m95g4\" (UID: \"35fcaab8-d9c9-4023-a99b-c15aee365e80\") " pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" Jul 7 06:04:49.599248 kubelet[2634]: I0707 06:04:49.598770 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-backend-key-pair\") pod \"whisker-855d6c9f76-dt7d4\" (UID: \"b8274d25-fd41-455e-9bdd-7b60485db03c\") " pod="calico-system/whisker-855d6c9f76-dt7d4" Jul 7 06:04:49.731237 containerd[1541]: time="2025-07-07T06:04:49.731176201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-sb8ps,Uid:f43cdfb4-d450-43db-b66c-b3c84cc5b1e9,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:04:49.733553 kubelet[2634]: E0707 06:04:49.733525 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:49.734144 containerd[1541]: time="2025-07-07T06:04:49.734109441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxs49,Uid:c1202884-ce77-4d20-a14f-e7b1c46573d9,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:49.734235 containerd[1541]: time="2025-07-07T06:04:49.734217096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-855d6c9f76-dt7d4,Uid:b8274d25-fd41-455e-9bdd-7b60485db03c,Namespace:calico-system,Attempt:0,}" Jul 7 06:04:49.739082 containerd[1541]: time="2025-07-07T06:04:49.739048235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-rznvr,Uid:9bc4bd25-cee7-4295-98da-f48b4661ae1c,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:04:50.028697 kubelet[2634]: E0707 06:04:50.028135 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:04:50.029168 containerd[1541]: time="2025-07-07T06:04:50.029129075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d994cf89-m95g4,Uid:35fcaab8-d9c9-4023-a99b-c15aee365e80,Namespace:calico-system,Attempt:0,}" Jul 7 06:04:50.038047 containerd[1541]: time="2025-07-07T06:04:50.037014947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pr8nc,Uid:7bb09144-47e0-49e9-802e-bdaaa9d250da,Namespace:kube-system,Attempt:0,}" Jul 7 06:04:50.038161 containerd[1541]: time="2025-07-07T06:04:50.037069874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qv4x5,Uid:3711a9de-1f07-4eac-8a8a-7a0a57ec740f,Namespace:calico-system,Attempt:0,}" Jul 7 06:04:50.250065 containerd[1541]: time="2025-07-07T06:04:50.250015824Z" level=error msg="Failed to destroy network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.250641 containerd[1541]: time="2025-07-07T06:04:50.250606661Z" level=error msg="encountered an error cleaning up failed sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 7 06:04:50.250768 containerd[1541]: time="2025-07-07T06:04:50.250744119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d994cf89-m95g4,Uid:35fcaab8-d9c9-4023-a99b-c15aee365e80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.250952 containerd[1541]: time="2025-07-07T06:04:50.250840532Z" level=error msg="Failed to destroy network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.251202 kubelet[2634]: E0707 06:04:50.251166 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.251532 containerd[1541]: time="2025-07-07T06:04:50.251201699Z" level=error msg="encountered an error cleaning up failed sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.251532 containerd[1541]: time="2025-07-07T06:04:50.251243624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-rznvr,Uid:9bc4bd25-cee7-4295-98da-f48b4661ae1c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.252247 kubelet[2634]: E0707 06:04:50.251895 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.252247 kubelet[2634]: E0707 06:04:50.252227 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" Jul 7 06:04:50.252368 kubelet[2634]: E0707 06:04:50.252257 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" Jul 7 06:04:50.252368 kubelet[2634]: E0707 06:04:50.252324 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55f9545c55-rznvr_calico-apiserver(9bc4bd25-cee7-4295-98da-f48b4661ae1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55f9545c55-rznvr_calico-apiserver(9bc4bd25-cee7-4295-98da-f48b4661ae1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" podUID="9bc4bd25-cee7-4295-98da-f48b4661ae1c" Jul 7 06:04:50.253702 kubelet[2634]: E0707 06:04:50.253669 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" Jul 7 06:04:50.253928 kubelet[2634]: E0707 06:04:50.253798 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" Jul 7 06:04:50.253928 kubelet[2634]: E0707 06:04:50.253858 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56d994cf89-m95g4_calico-system(35fcaab8-d9c9-4023-a99b-c15aee365e80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56d994cf89-m95g4_calico-system(35fcaab8-d9c9-4023-a99b-c15aee365e80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" podUID="35fcaab8-d9c9-4023-a99b-c15aee365e80" Jul 7 06:04:50.254328 containerd[1541]: time="2025-07-07T06:04:50.254281742Z" level=error msg="Failed to destroy network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.255786 containerd[1541]: time="2025-07-07T06:04:50.255725451Z" level=error msg="encountered an error cleaning up 
failed sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.255866 containerd[1541]: time="2025-07-07T06:04:50.255824744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-sb8ps,Uid:f43cdfb4-d450-43db-b66c-b3c84cc5b1e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.256218 kubelet[2634]: E0707 06:04:50.256061 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.256218 kubelet[2634]: E0707 06:04:50.256127 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" Jul 7 06:04:50.256218 kubelet[2634]: E0707 06:04:50.256146 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" Jul 7 06:04:50.256354 kubelet[2634]: E0707 06:04:50.256183 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55f9545c55-sb8ps_calico-apiserver(f43cdfb4-d450-43db-b66c-b3c84cc5b1e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55f9545c55-sb8ps_calico-apiserver(f43cdfb4-d450-43db-b66c-b3c84cc5b1e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" podUID="f43cdfb4-d450-43db-b66c-b3c84cc5b1e9" Jul 7 06:04:50.256416 containerd[1541]: time="2025-07-07T06:04:50.256227277Z" level=error msg="Failed to destroy network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 
06:04:50.256645 containerd[1541]: time="2025-07-07T06:04:50.256598925Z" level=error msg="encountered an error cleaning up failed sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.256680 containerd[1541]: time="2025-07-07T06:04:50.256657253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qv4x5,Uid:3711a9de-1f07-4eac-8a8a-7a0a57ec740f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.256812 kubelet[2634]: E0707 06:04:50.256787 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.256983 kubelet[2634]: E0707 06:04:50.256965 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qv4x5" Jul 7 06:04:50.257137 kubelet[2634]: E0707 06:04:50.257050 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-qv4x5" Jul 7 06:04:50.257137 kubelet[2634]: E0707 06:04:50.257102 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-qv4x5_calico-system(3711a9de-1f07-4eac-8a8a-7a0a57ec740f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-qv4x5_calico-system(3711a9de-1f07-4eac-8a8a-7a0a57ec740f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qv4x5" podUID="3711a9de-1f07-4eac-8a8a-7a0a57ec740f" Jul 7 06:04:50.265010 containerd[1541]: time="2025-07-07T06:04:50.264903252Z" level=error msg="Failed to destroy network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.265616 containerd[1541]: time="2025-07-07T06:04:50.265584061Z" level=error msg="encountered an error cleaning up failed sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.266767 containerd[1541]: time="2025-07-07T06:04:50.266664763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-855d6c9f76-dt7d4,Uid:b8274d25-fd41-455e-9bdd-7b60485db03c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.266973 kubelet[2634]: E0707 06:04:50.266938 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.267031 kubelet[2634]: E0707 06:04:50.267004 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-855d6c9f76-dt7d4" Jul 7 06:04:50.267103 containerd[1541]: time="2025-07-07T06:04:50.266965122Z" level=error msg="Failed to destroy network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.267177 kubelet[2634]: E0707 06:04:50.267033 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-855d6c9f76-dt7d4" Jul 7 06:04:50.267177 kubelet[2634]: E0707 06:04:50.267090 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-855d6c9f76-dt7d4_calico-system(b8274d25-fd41-455e-9bdd-7b60485db03c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-855d6c9f76-dt7d4_calico-system(b8274d25-fd41-455e-9bdd-7b60485db03c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-855d6c9f76-dt7d4" podUID="b8274d25-fd41-455e-9bdd-7b60485db03c" Jul 7 06:04:50.267302 containerd[1541]: time="2025-07-07T06:04:50.267277003Z" level=error msg="encountered an error cleaning up failed sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.267358 containerd[1541]: time="2025-07-07T06:04:50.267337811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxs49,Uid:c1202884-ce77-4d20-a14f-e7b1c46573d9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.268068 kubelet[2634]: E0707 06:04:50.268034 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.268143 kubelet[2634]: E0707 06:04:50.268073 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dxs49" Jul 7 06:04:50.268143 kubelet[2634]: E0707 06:04:50.268132 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dxs49" Jul 7 06:04:50.268209 kubelet[2634]: E0707 06:04:50.268164 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dxs49_kube-system(c1202884-ce77-4d20-a14f-e7b1c46573d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dxs49_kube-system(c1202884-ce77-4d20-a14f-e7b1c46573d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dxs49" podUID="c1202884-ce77-4d20-a14f-e7b1c46573d9" Jul 7 06:04:50.272063 containerd[1541]: time="2025-07-07T06:04:50.272015383Z" level=error msg="Failed to destroy network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.272345 containerd[1541]: time="2025-07-07T06:04:50.272309141Z" level=error msg="encountered an error cleaning up failed sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.272382 containerd[1541]: time="2025-07-07T06:04:50.272353347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pr8nc,Uid:7bb09144-47e0-49e9-802e-bdaaa9d250da,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.272550 kubelet[2634]: E0707 06:04:50.272511 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.272588 kubelet[2634]: E0707 06:04:50.272557 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pr8nc" Jul 7 06:04:50.272588 kubelet[2634]: E0707 06:04:50.272573 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-pr8nc" Jul 7 06:04:50.272638 kubelet[2634]: E0707 06:04:50.272604 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-pr8nc_kube-system(7bb09144-47e0-49e9-802e-bdaaa9d250da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-pr8nc_kube-system(7bb09144-47e0-49e9-802e-bdaaa9d250da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pr8nc" podUID="7bb09144-47e0-49e9-802e-bdaaa9d250da" Jul 7 06:04:50.298992 kubelet[2634]: I0707 06:04:50.298833 2634 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:04:50.300282 containerd[1541]: time="2025-07-07T06:04:50.300100818Z" level=info msg="StopPodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\"" Jul 7 06:04:50.300342 containerd[1541]: time="2025-07-07T06:04:50.300289083Z" level=info msg="Ensure that sandbox 133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954 in task-service has been cleanup successfully" Jul 7 06:04:50.301215 kubelet[2634]: I0707 06:04:50.301183 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Jul 7 06:04:50.302334 containerd[1541]: time="2025-07-07T06:04:50.301904455Z" level=info msg="StopPodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\"" Jul 7 06:04:50.302334 containerd[1541]: time="2025-07-07T06:04:50.302048673Z" level=info msg="Ensure that sandbox 33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1 in task-service has been cleanup successfully" Jul 7 06:04:50.303207 kubelet[2634]: I0707 06:04:50.303187 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:04:50.304544 containerd[1541]: time="2025-07-07T06:04:50.304410823Z" level=info msg="StopPodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\"" Jul 7 06:04:50.304677 containerd[1541]: time="2025-07-07T06:04:50.304651934Z" level=info msg="Ensure that sandbox 4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c in task-service has been cleanup successfully" Jul 7 06:04:50.305553 kubelet[2634]: I0707 06:04:50.305204 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Jul 7 06:04:50.305784 containerd[1541]: time="2025-07-07T06:04:50.305761599Z" level=info msg="StopPodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\"" Jul 7 06:04:50.305950 containerd[1541]: time="2025-07-07T06:04:50.305928221Z" level=info msg="Ensure that sandbox ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363 in task-service has been cleanup successfully" Jul 7 06:04:50.311448 containerd[1541]: time="2025-07-07T06:04:50.311248677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:04:50.312782 kubelet[2634]: I0707 06:04:50.312534 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:04:50.314954 containerd[1541]: time="2025-07-07T06:04:50.314911517Z" level=info msg="StopPodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\"" Jul 7 06:04:50.315626 containerd[1541]: time="2025-07-07T06:04:50.315390580Z" level=info msg="Ensure that sandbox 4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966 in task-service has been cleanup successfully" Jul 7 06:04:50.316844 kubelet[2634]: I0707 06:04:50.316786 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Jul 7 06:04:50.318021 containerd[1541]: time="2025-07-07T06:04:50.317863343Z" level=info msg="StopPodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\"" Jul 7 06:04:50.318671 
containerd[1541]: time="2025-07-07T06:04:50.318368889Z" level=info msg="Ensure that sandbox b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0 in task-service has been cleanup successfully" Jul 7 06:04:50.325133 kubelet[2634]: I0707 06:04:50.325106 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:04:50.326476 containerd[1541]: time="2025-07-07T06:04:50.326440066Z" level=info msg="StopPodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\"" Jul 7 06:04:50.326637 containerd[1541]: time="2025-07-07T06:04:50.326614248Z" level=info msg="Ensure that sandbox 246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0 in task-service has been cleanup successfully" Jul 7 06:04:50.352950 containerd[1541]: time="2025-07-07T06:04:50.352871325Z" level=error msg="StopPodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" failed" error="failed to destroy network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.353380 kubelet[2634]: E0707 06:04:50.353180 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:04:50.353380 kubelet[2634]: E0707 06:04:50.353250 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c"} Jul 7 06:04:50.353380 kubelet[2634]: E0707 06:04:50.353320 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35fcaab8-d9c9-4023-a99b-c15aee365e80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.353380 kubelet[2634]: E0707 06:04:50.353342 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35fcaab8-d9c9-4023-a99b-c15aee365e80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" podUID="35fcaab8-d9c9-4023-a99b-c15aee365e80" Jul 7 06:04:50.358262 containerd[1541]: time="2025-07-07T06:04:50.358216224Z" level=error msg="StopPodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" failed" error="failed to destroy network for sandbox 
\"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.358483 kubelet[2634]: E0707 06:04:50.358441 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Jul 7 06:04:50.358989 kubelet[2634]: E0707 06:04:50.358953 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"} Jul 7 06:04:50.359056 kubelet[2634]: E0707 06:04:50.359012 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bb09144-47e0-49e9-802e-bdaaa9d250da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.359056 kubelet[2634]: E0707 06:04:50.359034 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bb09144-47e0-49e9-802e-bdaaa9d250da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-pr8nc" podUID="7bb09144-47e0-49e9-802e-bdaaa9d250da" Jul 7 06:04:50.366127 containerd[1541]: time="2025-07-07T06:04:50.366080494Z" level=error msg="StopPodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" failed" error="failed to destroy network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.366563 kubelet[2634]: E0707 06:04:50.366420 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:04:50.366563 kubelet[2634]: E0707 06:04:50.366470 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966"} Jul 7 06:04:50.366563 kubelet[2634]: E0707 06:04:50.366506 2634 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9bc4bd25-cee7-4295-98da-f48b4661ae1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.366563 kubelet[2634]: E0707 06:04:50.366532 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9bc4bd25-cee7-4295-98da-f48b4661ae1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" podUID="9bc4bd25-cee7-4295-98da-f48b4661ae1c" Jul 7 06:04:50.367743 containerd[1541]: time="2025-07-07T06:04:50.367704826Z" level=error msg="StopPodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" failed" error="failed to destroy network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.367922 kubelet[2634]: E0707 06:04:50.367867 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:04:50.367974 kubelet[2634]: E0707 06:04:50.367930 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954"} Jul 7 06:04:50.367974 kubelet[2634]: E0707 06:04:50.367957 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1202884-ce77-4d20-a14f-e7b1c46573d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.368039 kubelet[2634]: E0707 06:04:50.367975 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1202884-ce77-4d20-a14f-e7b1c46573d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dxs49" 
podUID="c1202884-ce77-4d20-a14f-e7b1c46573d9" Jul 7 06:04:50.369184 containerd[1541]: time="2025-07-07T06:04:50.369141374Z" level=error msg="StopPodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" failed" error="failed to destroy network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.369377 kubelet[2634]: E0707 06:04:50.369345 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Jul 7 06:04:50.369427 kubelet[2634]: E0707 06:04:50.369382 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"} Jul 7 06:04:50.369427 kubelet[2634]: E0707 06:04:50.369417 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8274d25-fd41-455e-9bdd-7b60485db03c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.369513 kubelet[2634]: E0707 06:04:50.369438 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8274d25-fd41-455e-9bdd-7b60485db03c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-855d6c9f76-dt7d4" podUID="b8274d25-fd41-455e-9bdd-7b60485db03c" Jul 7 06:04:50.373072 containerd[1541]: time="2025-07-07T06:04:50.373029683Z" level=error msg="StopPodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" failed" error="failed to destroy network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.373300 kubelet[2634]: E0707 06:04:50.373262 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Jul 7 06:04:50.373349 kubelet[2634]: E0707 
06:04:50.373305 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"} Jul 7 06:04:50.373349 kubelet[2634]: E0707 06:04:50.373335 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3711a9de-1f07-4eac-8a8a-7a0a57ec740f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.373432 kubelet[2634]: E0707 06:04:50.373356 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3711a9de-1f07-4eac-8a8a-7a0a57ec740f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-qv4x5" podUID="3711a9de-1f07-4eac-8a8a-7a0a57ec740f" Jul 7 06:04:50.374478 containerd[1541]: time="2025-07-07T06:04:50.374440308Z" level=error msg="StopPodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" failed" error="failed to destroy network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:50.374613 kubelet[2634]: E0707 06:04:50.374588 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:04:50.374652 kubelet[2634]: E0707 06:04:50.374619 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0"} Jul 7 06:04:50.374652 kubelet[2634]: E0707 06:04:50.374644 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:50.374719 kubelet[2634]: E0707 06:04:50.374661 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" podUID="f43cdfb4-d450-43db-b66c-b3c84cc5b1e9" Jul 7 06:04:51.215593 containerd[1541]: time="2025-07-07T06:04:51.215553918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4crgb,Uid:9daedc13-d72f-4853-892c-86b97bad3b56,Namespace:calico-system,Attempt:0,}" Jul 7 06:04:51.271281 containerd[1541]: time="2025-07-07T06:04:51.271232631Z" level=error msg="Failed to destroy network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:51.271579 containerd[1541]: time="2025-07-07T06:04:51.271536589Z" level=error msg="encountered an error cleaning up failed sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:51.271612 containerd[1541]: time="2025-07-07T06:04:51.271584075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4crgb,Uid:9daedc13-d72f-4853-892c-86b97bad3b56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:51.271827 kubelet[2634]: E0707 06:04:51.271774 2634 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:51.272099 kubelet[2634]: E0707 06:04:51.271838 2634 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4crgb" Jul 7 06:04:51.272099 kubelet[2634]: E0707 06:04:51.271858 2634 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4crgb" Jul 7 06:04:51.272099 kubelet[2634]: E0707 06:04:51.271908 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4crgb_calico-system(9daedc13-d72f-4853-892c-86b97bad3b56)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4crgb_calico-system(9daedc13-d72f-4853-892c-86b97bad3b56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4crgb" podUID="9daedc13-d72f-4853-892c-86b97bad3b56" Jul 7 06:04:51.275075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516-shm.mount: Deactivated successfully. Jul 7 06:04:51.327323 kubelet[2634]: I0707 06:04:51.327295 2634 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Jul 7 06:04:51.329152 containerd[1541]: time="2025-07-07T06:04:51.329120942Z" level=info msg="StopPodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\"" Jul 7 06:04:51.329306 containerd[1541]: time="2025-07-07T06:04:51.329284323Z" level=info msg="Ensure that sandbox bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516 in task-service has been cleanup successfully" Jul 7 06:04:51.351153 containerd[1541]: time="2025-07-07T06:04:51.351106864Z" level=error msg="StopPodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" failed" error="failed to destroy network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:04:51.351338 kubelet[2634]: E0707 06:04:51.351306 2634 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Jul 7 06:04:51.351379 kubelet[2634]: E0707 06:04:51.351350 2634 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"} Jul 7 06:04:51.351402 kubelet[2634]: E0707 06:04:51.351388 2634 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9daedc13-d72f-4853-892c-86b97bad3b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:04:51.351454 kubelet[2634]: E0707 06:04:51.351408 2634 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9daedc13-d72f-4853-892c-86b97bad3b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4crgb" podUID="9daedc13-d72f-4853-892c-86b97bad3b56" Jul 7 06:04:54.381786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535458265.mount: Deactivated successfully. Jul 7 06:04:54.627720 containerd[1541]: time="2025-07-07T06:04:54.627670815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:54.629161 containerd[1541]: time="2025-07-07T06:04:54.629131378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 06:04:54.630034 containerd[1541]: time="2025-07-07T06:04:54.630006236Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:54.632681 containerd[1541]: time="2025-07-07T06:04:54.632574842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:54.644210 containerd[1541]: time="2025-07-07T06:04:54.643120100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.331828817s" Jul 7 06:04:54.644210 containerd[1541]: time="2025-07-07T06:04:54.643163705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 06:04:54.663256 containerd[1541]: time="2025-07-07T06:04:54.663214224Z" level=info msg="CreateContainer within sandbox \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:04:54.678202 containerd[1541]: time="2025-07-07T06:04:54.678153493Z" level=info msg="CreateContainer within sandbox \"2566ab98c284eddca32cf97b1fcb663d361a286124ae99ae84bb35ba82aadfe4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bfefce08f4cba35180ecc8c2d12b63372058f2eb7eacc09676c0cb9091e5b1f3\"" Jul 7 06:04:54.679656 containerd[1541]: time="2025-07-07T06:04:54.679008788Z" level=info msg="StartContainer for \"bfefce08f4cba35180ecc8c2d12b63372058f2eb7eacc09676c0cb9091e5b1f3\"" Jul 7 06:04:54.755710 containerd[1541]: time="2025-07-07T06:04:54.755607863Z" level=info msg="StartContainer for \"bfefce08f4cba35180ecc8c2d12b63372058f2eb7eacc09676c0cb9091e5b1f3\" returns successfully" Jul 7 06:04:55.009762 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:04:55.009847 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 7 06:04:55.142830 containerd[1541]: time="2025-07-07T06:04:55.142480735Z" level=info msg="StopPodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\"" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.276 [INFO][3909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.278 [INFO][3909] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" iface="eth0" netns="/var/run/netns/cni-508e0092-246a-cdbf-f9f2-41754a5dd653" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.279 [INFO][3909] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" iface="eth0" netns="/var/run/netns/cni-508e0092-246a-cdbf-f9f2-41754a5dd653" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.279 [INFO][3909] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" iface="eth0" netns="/var/run/netns/cni-508e0092-246a-cdbf-f9f2-41754a5dd653" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.279 [INFO][3909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.279 [INFO][3909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.434 [INFO][3925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.434 [INFO][3925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.434 [INFO][3925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.443 [WARNING][3925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.443 [INFO][3925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0" Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.445 [INFO][3925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:04:55.449394 containerd[1541]: 2025-07-07 06:04:55.447 [INFO][3909] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Jul 7 06:04:55.450282 containerd[1541]: time="2025-07-07T06:04:55.449922179Z" level=info msg="TearDown network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" successfully" Jul 7 06:04:55.450282 containerd[1541]: time="2025-07-07T06:04:55.449958903Z" level=info msg="StopPodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" returns successfully" Jul 7 06:04:55.452051 systemd[1]: run-netns-cni\x2d508e0092\x2d246a\x2dcdbf\x2df9f2\x2d41754a5dd653.mount: Deactivated successfully. Jul 7 06:04:55.647594 kubelet[2634]: I0707 06:04:55.647196 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrc54\" (UniqueName: \"kubernetes.io/projected/b8274d25-fd41-455e-9bdd-7b60485db03c-kube-api-access-jrc54\") pod \"b8274d25-fd41-455e-9bdd-7b60485db03c\" (UID: \"b8274d25-fd41-455e-9bdd-7b60485db03c\") " Jul 7 06:04:55.647594 kubelet[2634]: I0707 06:04:55.647245 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-backend-key-pair\") pod \"b8274d25-fd41-455e-9bdd-7b60485db03c\" (UID: \"b8274d25-fd41-455e-9bdd-7b60485db03c\") " Jul 7 06:04:55.647594 kubelet[2634]: I0707 06:04:55.647272 2634 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-ca-bundle\") pod \"b8274d25-fd41-455e-9bdd-7b60485db03c\" (UID: \"b8274d25-fd41-455e-9bdd-7b60485db03c\") " Jul 7 06:04:55.651828 systemd[1]: var-lib-kubelet-pods-b8274d25\x2dfd41\x2d455e\x2d9bdd\x2d7b60485db03c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrc54.mount: Deactivated successfully. Jul 7 06:04:55.652572 kubelet[2634]: I0707 06:04:55.652434 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8274d25-fd41-455e-9bdd-7b60485db03c-kube-api-access-jrc54" (OuterVolumeSpecName: "kube-api-access-jrc54") pod "b8274d25-fd41-455e-9bdd-7b60485db03c" (UID: "b8274d25-fd41-455e-9bdd-7b60485db03c"). InnerVolumeSpecName "kube-api-access-jrc54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:04:55.654865 kubelet[2634]: I0707 06:04:55.654832 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b8274d25-fd41-455e-9bdd-7b60485db03c" (UID: "b8274d25-fd41-455e-9bdd-7b60485db03c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:04:55.665393 systemd[1]: var-lib-kubelet-pods-b8274d25\x2dfd41\x2d455e\x2d9bdd\x2d7b60485db03c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 7 06:04:55.666341 kubelet[2634]: I0707 06:04:55.666287 2634 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b8274d25-fd41-455e-9bdd-7b60485db03c" (UID: "b8274d25-fd41-455e-9bdd-7b60485db03c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:04:55.748204 kubelet[2634]: I0707 06:04:55.748072 2634 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrc54\" (UniqueName: \"kubernetes.io/projected/b8274d25-fd41-455e-9bdd-7b60485db03c-kube-api-access-jrc54\") on node \"localhost\" DevicePath \"\"" Jul 7 06:04:55.748204 kubelet[2634]: I0707 06:04:55.748109 2634 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:04:55.748204 kubelet[2634]: I0707 06:04:55.748119 2634 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8274d25-fd41-455e-9bdd-7b60485db03c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:04:56.349180 kubelet[2634]: I0707 06:04:56.349096 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:04:56.363294 kubelet[2634]: I0707 06:04:56.363120 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jsbh2" podStartSLOduration=2.182515856 podStartE2EDuration="13.363101986s" podCreationTimestamp="2025-07-07 06:04:43 +0000 UTC" firstStartedPulling="2025-07-07 06:04:43.469955599 +0000 UTC m=+19.337808702" lastFinishedPulling="2025-07-07 06:04:54.650541729 +0000 UTC m=+30.518394832" observedRunningTime="2025-07-07 06:04:55.36313824 +0000 UTC m=+31.230991343" watchObservedRunningTime="2025-07-07 06:04:56.363101986 +0000 UTC m=+32.230955089" Jul 7 06:04:56.553173 kubelet[2634]: I0707 06:04:56.552712 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac85e059-b718-4b49-b9f6-83ff4be681d7-whisker-ca-bundle\") pod \"whisker-649b8769d8-lqt6b\" (UID: \"ac85e059-b718-4b49-b9f6-83ff4be681d7\") " pod="calico-system/whisker-649b8769d8-lqt6b" Jul 7 06:04:56.553173 kubelet[2634]: I0707 06:04:56.553020 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nft\" (UniqueName: \"kubernetes.io/projected/ac85e059-b718-4b49-b9f6-83ff4be681d7-kube-api-access-b8nft\") pod \"whisker-649b8769d8-lqt6b\" (UID: \"ac85e059-b718-4b49-b9f6-83ff4be681d7\") " pod="calico-system/whisker-649b8769d8-lqt6b" Jul 7 06:04:56.553173 kubelet[2634]: I0707 06:04:56.553095 2634 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ac85e059-b718-4b49-b9f6-83ff4be681d7-whisker-backend-key-pair\") pod \"whisker-649b8769d8-lqt6b\" (UID: \"ac85e059-b718-4b49-b9f6-83ff4be681d7\") " pod="calico-system/whisker-649b8769d8-lqt6b" Jul 7 06:04:56.749057 containerd[1541]: time="2025-07-07T06:04:56.748953676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649b8769d8-lqt6b,Uid:ac85e059-b718-4b49-b9f6-83ff4be681d7,Namespace:calico-system,Attempt:0,}" Jul 7 06:04:56.976909 systemd-networkd[1228]: cali8ed0e51c20d: Link UP Jul 7 06:04:56.977654 systemd-networkd[1228]: cali8ed0e51c20d: Gained carrier Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.887 [INFO][4097] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.900 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--649b8769d8--lqt6b-eth0 whisker-649b8769d8- calico-system ac85e059-b718-4b49-b9f6-83ff4be681d7 939 0 2025-07-07 06:04:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:649b8769d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-649b8769d8-lqt6b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8ed0e51c20d [] [] }} ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.900 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.925 [INFO][4111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" HandleID="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Workload="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.926 [INFO][4111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" HandleID="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Workload="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-649b8769d8-lqt6b", "timestamp":"2025-07-07 06:04:56.925894081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.926 [INFO][4111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.926 [INFO][4111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.926 [INFO][4111] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.936 [INFO][4111] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.949 [INFO][4111] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.952 [INFO][4111] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.954 [INFO][4111] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.956 [INFO][4111] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.956 [INFO][4111] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.957 [INFO][4111] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466 Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.960 [INFO][4111] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.965 [INFO][4111] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.965 [INFO][4111] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" host="localhost" Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.965 [INFO][4111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
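Note: the [4111] allocation trace above is Calico's block-affinity IPAM in miniature: take the host-wide lock, confirm the host's affinity to the 192.168.88.128/26 block, claim a free address (192.168.88.129), record a handle, write the block back, release the lock. A toy model of the claim step follows, under the assumption that assignment is essentially first-free-in-block; the types and method here are invented, not Calico's ipam package:

package main

import (
	"fmt"
	"net/netip"
)

// block models an IPAM block: a CIDR plus the addresses already handed out.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // addr -> handle ID
}

// assign claims the first free address in the block for the given handle,
// mirroring the "Attempting to assign 1 addresses from block" step above.
func (b *block) assign(handle string) (netip.Addr, error) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if a == b.cidr.Addr() {
			continue // skip the block's network address, as the .129 result suggests
		}
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := &block{
		cidr:      netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{},
	}
	addr, err := b.assign("k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466")
	if err != nil {
		panic(err)
	}
	fmt.Println(addr) // 192.168.88.129, as in the trace
}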
Jul 7 06:04:56.991270 containerd[1541]: 2025-07-07 06:04:56.965 [INFO][4111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" HandleID="k8s-pod-network.c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Workload="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:56.991958 containerd[1541]: 2025-07-07 06:04:56.967 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--649b8769d8--lqt6b-eth0", GenerateName:"whisker-649b8769d8-", Namespace:"calico-system", SelfLink:"", UID:"ac85e059-b718-4b49-b9f6-83ff4be681d7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"649b8769d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-649b8769d8-lqt6b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ed0e51c20d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:04:56.991958 containerd[1541]: 2025-07-07 06:04:56.968 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:56.991958 containerd[1541]: 2025-07-07 06:04:56.968 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ed0e51c20d ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:56.991958 containerd[1541]: 2025-07-07 06:04:56.978 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:56.991958 containerd[1541]: 2025-07-07 06:04:56.978 [INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--649b8769d8--lqt6b-eth0", GenerateName:"whisker-649b8769d8-", Namespace:"calico-system", SelfLink:"", UID:"ac85e059-b718-4b49-b9f6-83ff4be681d7", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"649b8769d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466", Pod:"whisker-649b8769d8-lqt6b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8ed0e51c20d", MAC:"8a:12:5e:5f:ed:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:04:56.991958 containerd[1541]: 2025-07-07 06:04:56.988 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466" Namespace="calico-system" Pod="whisker-649b8769d8-lqt6b" WorkloadEndpoint="localhost-k8s-whisker--649b8769d8--lqt6b-eth0" Jul 7 06:04:57.005587 containerd[1541]: time="2025-07-07T06:04:57.005437481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:04:57.005587 containerd[1541]: time="2025-07-07T06:04:57.005491607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:04:57.005587 containerd[1541]: time="2025-07-07T06:04:57.005511569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:57.006472 containerd[1541]: time="2025-07-07T06:04:57.006425140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:04:57.034123 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:04:57.063935 containerd[1541]: time="2025-07-07T06:04:57.063856456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-649b8769d8-lqt6b,Uid:ac85e059-b718-4b49-b9f6-83ff4be681d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466\"" Jul 7 06:04:57.065586 containerd[1541]: time="2025-07-07T06:04:57.065567107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:04:57.973931 containerd[1541]: time="2025-07-07T06:04:57.973845531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:57.974408 containerd[1541]: time="2025-07-07T06:04:57.974322059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 06:04:57.975209 containerd[1541]: time="2025-07-07T06:04:57.975181705Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:57.978388 containerd[1541]: time="2025-07-07T06:04:57.978335381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:57.979264 containerd[1541]: time="2025-07-07T06:04:57.979227750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 913.6306ms" Jul 7 06:04:57.979453 containerd[1541]: time="2025-07-07T06:04:57.979350043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 06:04:57.981593 containerd[1541]: time="2025-07-07T06:04:57.981557184Z" level=info msg="CreateContainer within sandbox \"c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:04:57.992651 containerd[1541]: time="2025-07-07T06:04:57.992601411Z" level=info msg="CreateContainer within sandbox \"c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9664ea265fd6ae8249c0c2fd80aebb084441c3d83ccc6b1ec0852e0d30f847e9\"" Jul 7 06:04:57.993350 containerd[1541]: time="2025-07-07T06:04:57.993325803Z" level=info msg="StartContainer for \"9664ea265fd6ae8249c0c2fd80aebb084441c3d83ccc6b1ec0852e0d30f847e9\"" Jul 7 06:04:58.049301 containerd[1541]: time="2025-07-07T06:04:58.047211328Z" level=info msg="StartContainer for \"9664ea265fd6ae8249c0c2fd80aebb084441c3d83ccc6b1ec0852e0d30f847e9\" returns successfully" Jul 7 06:04:58.050699 containerd[1541]: time="2025-07-07T06:04:58.050668103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:04:58.216078 kubelet[2634]: I0707 06:04:58.216029 2634 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="b8274d25-fd41-455e-9bdd-7b60485db03c" path="/var/lib/kubelet/pods/b8274d25-fd41-455e-9bdd-7b60485db03c/volumes" Jul 7 06:04:58.949173 systemd-networkd[1228]: cali8ed0e51c20d: Gained IPv6LL Jul 7 06:04:59.335405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3067019014.mount: Deactivated successfully. Jul 7 06:04:59.388907 containerd[1541]: time="2025-07-07T06:04:59.388445302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:59.388907 containerd[1541]: time="2025-07-07T06:04:59.388841179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 06:04:59.389735 containerd[1541]: time="2025-07-07T06:04:59.389683978Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:59.391927 containerd[1541]: time="2025-07-07T06:04:59.391868742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:04:59.392792 containerd[1541]: time="2025-07-07T06:04:59.392716382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.342008275s" Jul 7 06:04:59.392792 containerd[1541]: time="2025-07-07T06:04:59.392746305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 06:04:59.395929 containerd[1541]: time="2025-07-07T06:04:59.395030799Z" level=info msg="CreateContainer within sandbox \"c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:04:59.406122 containerd[1541]: time="2025-07-07T06:04:59.406083875Z" level=info msg="CreateContainer within sandbox \"c8f7d915b15e279c7ee1a30c66069beb167935089d19af0574abed85eb347466\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"642387831d85a534e1533b127af802d99ac9cb79003275472138777f7e86ceb3\"" Jul 7 06:04:59.407573 containerd[1541]: time="2025-07-07T06:04:59.407032763Z" level=info msg="StartContainer for \"642387831d85a534e1533b127af802d99ac9cb79003275472138777f7e86ceb3\"" Jul 7 06:04:59.456955 containerd[1541]: time="2025-07-07T06:04:59.456869874Z" level=info msg="StartContainer for \"642387831d85a534e1533b127af802d99ac9cb79003275472138777f7e86ceb3\" returns successfully" Jul 7 06:05:01.117100 kubelet[2634]: I0707 06:05:01.117054 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:01.117100 kubelet[2634]: E0707 06:05:01.117419 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:01.146882 kubelet[2634]: I0707 06:05:01.146806 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/whisker-649b8769d8-lqt6b" podStartSLOduration=2.818368505 podStartE2EDuration="5.146780541s" podCreationTimestamp="2025-07-07 06:04:56 +0000 UTC" firstStartedPulling="2025-07-07 06:04:57.065121222 +0000 UTC m=+32.932974325" lastFinishedPulling="2025-07-07 06:04:59.393533298 +0000 UTC m=+35.261386361" observedRunningTime="2025-07-07 06:05:00.370697382 +0000 UTC m=+36.238550525" watchObservedRunningTime="2025-07-07 06:05:01.146780541 +0000 UTC m=+37.014633644" Jul 7 06:05:01.361974 kubelet[2634]: E0707 06:05:01.361933 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:01.747922 kernel: bpftool[4400]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 06:05:01.928429 systemd-networkd[1228]: vxlan.calico: Link UP Jul 7 06:05:01.928437 systemd-networkd[1228]: vxlan.calico: Gained carrier Jul 7 06:05:02.214581 containerd[1541]: time="2025-07-07T06:05:02.214473185Z" level=info msg="StopPodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\"" Jul 7 06:05:02.214963 containerd[1541]: time="2025-07-07T06:05:02.214654321Z" level=info msg="StopPodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\"" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.272 [INFO][4543] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.272 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" iface="eth0" netns="/var/run/netns/cni-34279385-03be-4808-6574-d3677e3ac2cf" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.273 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" iface="eth0" netns="/var/run/netns/cni-34279385-03be-4808-6574-d3677e3ac2cf" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.273 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" iface="eth0" netns="/var/run/netns/cni-34279385-03be-4808-6574-d3677e3ac2cf" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.273 [INFO][4543] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.273 [INFO][4543] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.301 [INFO][4558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.301 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.301 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.311 [WARNING][4558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.311 [INFO][4558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.317 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:02.321223 containerd[1541]: 2025-07-07 06:05:02.319 [INFO][4543] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:02.323704 systemd[1]: run-netns-cni\x2d34279385\x2d03be\x2d4808\x2d6574\x2dd3677e3ac2cf.mount: Deactivated successfully. Jul 7 06:05:02.325456 containerd[1541]: time="2025-07-07T06:05:02.325419581Z" level=info msg="TearDown network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" successfully" Jul 7 06:05:02.325456 containerd[1541]: time="2025-07-07T06:05:02.325456864Z" level=info msg="StopPodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" returns successfully" Jul 7 06:05:02.326847 containerd[1541]: time="2025-07-07T06:05:02.326816740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-sb8ps,Uid:f43cdfb4-d450-43db-b66c-b3c84cc5b1e9,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.284 [INFO][4542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.284 [INFO][4542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" iface="eth0" netns="/var/run/netns/cni-690f5827-73a3-a08a-d971-d377d19542ee" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.285 [INFO][4542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" iface="eth0" netns="/var/run/netns/cni-690f5827-73a3-a08a-d971-d377d19542ee" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.285 [INFO][4542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" iface="eth0" netns="/var/run/netns/cni-690f5827-73a3-a08a-d971-d377d19542ee" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.285 [INFO][4542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.285 [INFO][4542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.306 [INFO][4565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.306 [INFO][4565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.317 [INFO][4565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.328 [WARNING][4565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.328 [INFO][4565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.329 [INFO][4565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:02.333544 containerd[1541]: 2025-07-07 06:05:02.331 [INFO][4542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:02.333962 containerd[1541]: time="2025-07-07T06:05:02.333799657Z" level=info msg="TearDown network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" successfully" Jul 7 06:05:02.333962 containerd[1541]: time="2025-07-07T06:05:02.333825819Z" level=info msg="StopPodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" returns successfully" Jul 7 06:05:02.336432 containerd[1541]: time="2025-07-07T06:05:02.336195181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-rznvr,Uid:9bc4bd25-cee7-4295-98da-f48b4661ae1c,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:05:02.336292 systemd[1]: run-netns-cni\x2d690f5827\x2d73a3\x2da08a\x2dd971\x2dd377d19542ee.mount: Deactivated successfully. 
Jul 7 06:05:02.476775 systemd-networkd[1228]: calif32de6edb66: Link UP Jul 7 06:05:02.477811 systemd-networkd[1228]: calif32de6edb66: Gained carrier Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.409 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0 calico-apiserver-55f9545c55- calico-apiserver f43cdfb4-d450-43db-b66c-b3c84cc5b1e9 982 0 2025-07-07 06:04:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55f9545c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55f9545c55-sb8ps eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif32de6edb66 [] [] }} ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.410 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.439 [INFO][4604] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" HandleID="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.439 [INFO][4604] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" HandleID="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039a130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55f9545c55-sb8ps", "timestamp":"2025-07-07 06:05:02.439435639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.439 [INFO][4604] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.439 [INFO][4604] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
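The assignArgs dump above shows the shape of the IPAM request behind a CNI ADD: one IPv4, no IPv6, a handle ID tying the allocation to the sandbox, and attributes recording namespace, node, pod, and timestamp. An illustrative struct mirroring only the fields visible in the log (not Calico's actual ipam.AutoAssignArgs definition):

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields visible in the logged dump;
    // it is illustrative, not the real Calico type.
    type autoAssignArgs struct {
        Num4, Num6  int
        HandleID    string
        Attrs       map[string]string
        Hostname    string
        IntendedUse string
    }

    func main() {
        req := autoAssignArgs{
            Num4:     1,
            Num6:     0,
            HandleID: "k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be",
            Attrs: map[string]string{
                "namespace": "calico-apiserver",
                "node":      "localhost",
                "pod":       "calico-apiserver-55f9545c55-sb8ps",
            },
            Hostname:    "localhost",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", req)
    }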
Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.439 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.448 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.452 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.455 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.457 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.459 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.459 [INFO][4604] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.460 [INFO][4604] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.467 [INFO][4604] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.472 [INFO][4604] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.473 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" host="localhost" Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.473 [INFO][4604] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
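The assignment above confirms the node's affinity for block 192.168.88.128/26, then claims the first free address in it (.130 here; the lower addresses were taken earlier in the boot). A toy version of that scan, assuming a simple in-memory allocated set rather than Calico's datastore-backed block bitmap:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFree scans a CIDR block in order and returns the first address
    // not in the allocated set.
    func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if !allocated[a] {
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        // .128 and .129 being in use is an assumption, consistent with
        // .130 being the next address claimed in the log.
        allocated := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.88.128"): true,
            netip.MustParseAddr("192.168.88.129"): true,
        }
        ip, _ := nextFree(block, allocated)
        fmt.Println(ip) // 192.168.88.130, matching the log
    }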
Jul 7 06:05:02.491853 containerd[1541]: 2025-07-07 06:05:02.473 [INFO][4604] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" HandleID="k8s-pod-network.9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.492382 containerd[1541]: 2025-07-07 06:05:02.475 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55f9545c55-sb8ps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32de6edb66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:02.492382 containerd[1541]: 2025-07-07 06:05:02.475 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.492382 containerd[1541]: 2025-07-07 06:05:02.475 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif32de6edb66 ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.492382 containerd[1541]: 2025-07-07 06:05:02.477 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.492382 containerd[1541]: 2025-07-07 06:05:02.478 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be", Pod:"calico-apiserver-55f9545c55-sb8ps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32de6edb66", MAC:"46:48:5f:c4:01:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:02.492382 containerd[1541]: 2025-07-07 06:05:02.489 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-sb8ps" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:02.506744 containerd[1541]: time="2025-07-07T06:05:02.506630738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:02.506744 containerd[1541]: time="2025-07-07T06:05:02.506711625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:02.506744 containerd[1541]: time="2025-07-07T06:05:02.506735987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:02.506946 containerd[1541]: time="2025-07-07T06:05:02.506828195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:02.545087 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:02.570153 containerd[1541]: time="2025-07-07T06:05:02.570095438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-sb8ps,Uid:f43cdfb4-d450-43db-b66c-b3c84cc5b1e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be\"" Jul 7 06:05:02.571954 containerd[1541]: time="2025-07-07T06:05:02.571863389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:05:02.581572 systemd-networkd[1228]: cali66734f2ca06: Link UP Jul 7 06:05:02.582253 systemd-networkd[1228]: cali66734f2ca06: Gained carrier Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.415 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0 calico-apiserver-55f9545c55- calico-apiserver 9bc4bd25-cee7-4295-98da-f48b4661ae1c 983 0 2025-07-07 06:04:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55f9545c55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-55f9545c55-rznvr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali66734f2ca06 [] [] }} ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.415 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.440 [INFO][4606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" HandleID="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.440 [INFO][4606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" HandleID="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-55f9545c55-rznvr", "timestamp":"2025-07-07 06:05:02.440007088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.440 [INFO][4606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.473 [INFO][4606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.473 [INFO][4606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.551 [INFO][4606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.555 [INFO][4606] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.560 [INFO][4606] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.562 [INFO][4606] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.565 [INFO][4606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.565 [INFO][4606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.566 [INFO][4606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.570 [INFO][4606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.576 [INFO][4606] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.576 [INFO][4606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" host="localhost" Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.576 [INFO][4606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
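Note the interleaving: the rznvr request logs "About to acquire host-wide IPAM lock" at 02.440 but only acquires it at 02.473, the moment the sb8ps assignment releases it, so concurrent CNI ADDs on a node serialize and cannot double-claim an address (rznvr gets .131 after sb8ps took .130). A sketch of that discipline with an ordinary in-process mutex, which is only an analogy for the host-wide lock:

    package main

    import (
        "fmt"
        "sync"
    )

    var (
        hostWide  sync.Mutex // in-process stand-in for the host-wide IPAM lock
        nextOctet = 130      // next free host octet in 192.168.88.128/26
    )

    func assign(pod string, wg *sync.WaitGroup) {
        defer wg.Done()
        hostWide.Lock() // "About to acquire host-wide IPAM lock." blocks here
        defer hostWide.Unlock()
        fmt.Printf("%s -> 192.168.88.%d\n", pod, nextOctet)
        nextOctet++
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        // Both ADDs run concurrently, as in the log; which one wins the
        // race varies per run (in the log, sb8ps acquired first).
        go assign("calico-apiserver-55f9545c55-sb8ps", &wg)
        go assign("calico-apiserver-55f9545c55-rznvr", &wg)
        wg.Wait()
    }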
Jul 7 06:05:02.598823 containerd[1541]: 2025-07-07 06:05:02.576 [INFO][4606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" HandleID="k8s-pod-network.11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.599800 containerd[1541]: 2025-07-07 06:05:02.578 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bc4bd25-cee7-4295-98da-f48b4661ae1c", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-55f9545c55-rznvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66734f2ca06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:02.599800 containerd[1541]: 2025-07-07 06:05:02.578 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.599800 containerd[1541]: 2025-07-07 06:05:02.578 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66734f2ca06 ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.599800 containerd[1541]: 2025-07-07 06:05:02.581 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.599800 containerd[1541]: 2025-07-07 06:05:02.582 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bc4bd25-cee7-4295-98da-f48b4661ae1c", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be", Pod:"calico-apiserver-55f9545c55-rznvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66734f2ca06", MAC:"62:25:ad:2b:ae:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:02.599800 containerd[1541]: 2025-07-07 06:05:02.595 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be" Namespace="calico-apiserver" Pod="calico-apiserver-55f9545c55-rznvr" WorkloadEndpoint="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:02.615729 containerd[1541]: time="2025-07-07T06:05:02.615592084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:02.616300 containerd[1541]: time="2025-07-07T06:05:02.616249100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:02.616300 containerd[1541]: time="2025-07-07T06:05:02.616273182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:02.616552 containerd[1541]: time="2025-07-07T06:05:02.616499002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:02.643780 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:02.671057 containerd[1541]: time="2025-07-07T06:05:02.670999736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55f9545c55-rznvr,Uid:9bc4bd25-cee7-4295-98da-f48b4661ae1c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be\"" Jul 7 06:05:03.213854 containerd[1541]: time="2025-07-07T06:05:03.213517802Z" level=info msg="StopPodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\"" Jul 7 06:05:03.213854 containerd[1541]: time="2025-07-07T06:05:03.214155695Z" level=info msg="StopPodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\"" Jul 7 06:05:03.237049 systemd-networkd[1228]: vxlan.calico: Gained IPv6LL Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.262 [INFO][4741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.262 [INFO][4741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" iface="eth0" netns="/var/run/netns/cni-6443a918-6ccc-0fa1-21d7-4947e3f7e45e" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.263 [INFO][4741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" iface="eth0" netns="/var/run/netns/cni-6443a918-6ccc-0fa1-21d7-4947e3f7e45e" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.263 [INFO][4741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" iface="eth0" netns="/var/run/netns/cni-6443a918-6ccc-0fa1-21d7-4947e3f7e45e" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.263 [INFO][4741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.263 [INFO][4741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.286 [INFO][4761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.286 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.286 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.295 [WARNING][4761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.295 [INFO][4761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.297 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:03.300921 containerd[1541]: 2025-07-07 06:05:03.298 [INFO][4741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Jul 7 06:05:03.301295 containerd[1541]: time="2025-07-07T06:05:03.301084388Z" level=info msg="TearDown network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" successfully" Jul 7 06:05:03.301295 containerd[1541]: time="2025-07-07T06:05:03.301123832Z" level=info msg="StopPodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" returns successfully" Jul 7 06:05:03.302103 kubelet[2634]: E0707 06:05:03.302074 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:03.303560 containerd[1541]: time="2025-07-07T06:05:03.302722564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pr8nc,Uid:7bb09144-47e0-49e9-802e-bdaaa9d250da,Namespace:kube-system,Attempt:1,}" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.266 [INFO][4750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.266 [INFO][4750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" iface="eth0" netns="/var/run/netns/cni-c4dd7dcf-d1ab-678f-e086-32c8d7a45874" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.266 [INFO][4750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" iface="eth0" netns="/var/run/netns/cni-c4dd7dcf-d1ab-678f-e086-32c8d7a45874" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.267 [INFO][4750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" iface="eth0" netns="/var/run/netns/cni-c4dd7dcf-d1ab-678f-e086-32c8d7a45874" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.267 [INFO][4750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.267 [INFO][4750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.289 [INFO][4767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.289 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.297 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.308 [WARNING][4767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.308 [INFO][4767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.310 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:03.314638 containerd[1541]: 2025-07-07 06:05:03.312 [INFO][4750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:03.315150 containerd[1541]: time="2025-07-07T06:05:03.315034106Z" level=info msg="TearDown network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" successfully" Jul 7 06:05:03.315150 containerd[1541]: time="2025-07-07T06:05:03.315060988Z" level=info msg="StopPodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" returns successfully" Jul 7 06:05:03.315449 kubelet[2634]: E0707 06:05:03.315396 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:03.316309 containerd[1541]: time="2025-07-07T06:05:03.315834372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxs49,Uid:c1202884-ce77-4d20-a14f-e7b1c46573d9,Namespace:kube-system,Attempt:1,}" Jul 7 06:05:03.326201 systemd[1]: run-netns-cni\x2d6443a918\x2d6ccc\x2d0fa1\x2d21d7\x2d4947e3f7e45e.mount: Deactivated successfully. Jul 7 06:05:03.332057 systemd[1]: run-netns-cni\x2dc4dd7dcf\x2dd1ab\x2d678f\x2de086\x2d32c8d7a45874.mount: Deactivated successfully. 
Jul 7 06:05:03.433821 systemd-networkd[1228]: cali7950cac01f9: Link UP Jul 7 06:05:03.434125 systemd-networkd[1228]: cali7950cac01f9: Gained carrier Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.370 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0 coredns-7c65d6cfc9- kube-system c1202884-ce77-4d20-a14f-e7b1c46573d9 997 0 2025-07-07 06:04:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-dxs49 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7950cac01f9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.370 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.394 [INFO][4809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" HandleID="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.394 [INFO][4809] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" HandleID="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-dxs49", "timestamp":"2025-07-07 06:05:03.394772722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.395 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.395 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
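Every host-side interface in this log has the same shape: the fixed prefix "cali" plus eleven hex characters (calif32de6edb66, cali66734f2ca06, cali7950cac01f9), which keeps the full name at the kernel's 15-character interface-name limit. A sketch of deriving such a name by hashing the workload endpoint identifier; the exact hash input Calico uses is an assumption here, but the 4+11 shape matches the log:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName derives a "cali" + 11-hex-char interface name, 15 chars
    // total, from a workload endpoint identifier.
    func vethName(endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return "cali" + hex.EncodeToString(sum[:])[:11]
    }

    func main() {
        fmt.Println(vethName("kube-system/coredns-7c65d6cfc9-dxs49"))
    }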
Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.395 [INFO][4809] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.403 [INFO][4809] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.409 [INFO][4809] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.412 [INFO][4809] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.414 [INFO][4809] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.417 [INFO][4809] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.417 [INFO][4809] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.419 [INFO][4809] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.422 [INFO][4809] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.428 [INFO][4809] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.428 [INFO][4809] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" host="localhost" Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.428 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
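The same ports appear in two notations in the endpoint dumps that follow: plugin.go prints them in decimal ({dns UDP 53 0 }) while the struct dump prints hex (Port:0x35, Port:0x23c1). A two-line check that they agree:

    package main

    import "fmt"

    func main() {
        // 0x35 and 0x23c1 from the WorkloadEndpointPort dumps below.
        fmt.Println(0x35, 0x23c1) // 53 9153 — the DNS and CoreDNS metrics ports
    }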
Jul 7 06:05:03.448490 containerd[1541]: 2025-07-07 06:05:03.428 [INFO][4809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" HandleID="k8s-pod-network.0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.449298 containerd[1541]: 2025-07-07 06:05:03.430 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c1202884-ce77-4d20-a14f-e7b1c46573d9", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-dxs49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7950cac01f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:03.449298 containerd[1541]: 2025-07-07 06:05:03.430 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.449298 containerd[1541]: 2025-07-07 06:05:03.430 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7950cac01f9 ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.449298 containerd[1541]: 2025-07-07 06:05:03.434 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.449298 containerd[1541]: 
2025-07-07 06:05:03.435 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c1202884-ce77-4d20-a14f-e7b1c46573d9", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac", Pod:"coredns-7c65d6cfc9-dxs49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7950cac01f9", MAC:"c2:a2:12:0a:0b:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:03.449298 containerd[1541]: 2025-07-07 06:05:03.446 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dxs49" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:03.474767 containerd[1541]: time="2025-07-07T06:05:03.463264526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:03.474877 containerd[1541]: time="2025-07-07T06:05:03.474692514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:03.474877 containerd[1541]: time="2025-07-07T06:05:03.474709116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:03.476517 containerd[1541]: time="2025-07-07T06:05:03.476364533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:03.506874 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:03.531791 containerd[1541]: time="2025-07-07T06:05:03.531702165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dxs49,Uid:c1202884-ce77-4d20-a14f-e7b1c46573d9,Namespace:kube-system,Attempt:1,} returns sandbox id \"0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac\"" Jul 7 06:05:03.535190 kubelet[2634]: E0707 06:05:03.535148 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:03.538363 containerd[1541]: time="2025-07-07T06:05:03.538283831Z" level=info msg="CreateContainer within sandbox \"0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:05:03.545535 systemd-networkd[1228]: cali6553dcd7de4: Link UP Jul 7 06:05:03.547231 systemd-networkd[1228]: cali6553dcd7de4: Gained carrier Jul 7 06:05:03.559830 containerd[1541]: time="2025-07-07T06:05:03.559780775Z" level=info msg="CreateContainer within sandbox \"0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49b3b4452819e03d3ce7b19f90a5e6076a91bfb64cc1787b6be7f885d39bdea1\"" Jul 7 06:05:03.561243 containerd[1541]: time="2025-07-07T06:05:03.561214254Z" level=info msg="StartContainer for \"49b3b4452819e03d3ce7b19f90a5e6076a91bfb64cc1787b6be7f885d39bdea1\"" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.369 [INFO][4778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0 coredns-7c65d6cfc9- kube-system 7bb09144-47e0-49e9-802e-bdaaa9d250da 996 0 2025-07-07 06:04:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-pr8nc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6553dcd7de4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.369 [INFO][4778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.395 [INFO][4807] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" HandleID="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.395 [INFO][4807] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" 
HandleID="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-pr8nc", "timestamp":"2025-07-07 06:05:03.395456379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.395 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.428 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.428 [INFO][4807] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.504 [INFO][4807] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.510 [INFO][4807] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.515 [INFO][4807] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.517 [INFO][4807] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.519 [INFO][4807] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.519 [INFO][4807] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.521 [INFO][4807] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48 Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.527 [INFO][4807] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.533 [INFO][4807] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.533 [INFO][4807] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" host="localhost" Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.533 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:05:03.562341 containerd[1541]: 2025-07-07 06:05:03.533 [INFO][4807] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" HandleID="k8s-pod-network.45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.564328 containerd[1541]: 2025-07-07 06:05:03.536 [INFO][4778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7bb09144-47e0-49e9-802e-bdaaa9d250da", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-pr8nc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6553dcd7de4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:03.564328 containerd[1541]: 2025-07-07 06:05:03.537 [INFO][4778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.564328 containerd[1541]: 2025-07-07 06:05:03.537 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6553dcd7de4 ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.564328 containerd[1541]: 2025-07-07 06:05:03.547 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.564328 containerd[1541]: 
2025-07-07 06:05:03.549 [INFO][4778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7bb09144-47e0-49e9-802e-bdaaa9d250da", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48", Pod:"coredns-7c65d6cfc9-pr8nc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6553dcd7de4", MAC:"ae:58:f6:47:dd:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:03.564328 containerd[1541]: 2025-07-07 06:05:03.556 [INFO][4778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48" Namespace="kube-system" Pod="coredns-7c65d6cfc9-pr8nc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0" Jul 7 06:05:03.582624 containerd[1541]: time="2025-07-07T06:05:03.582500340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:03.582624 containerd[1541]: time="2025-07-07T06:05:03.582557425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:03.582624 containerd[1541]: time="2025-07-07T06:05:03.582568946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:03.582816 containerd[1541]: time="2025-07-07T06:05:03.582648072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:03.603714 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:03.628413 containerd[1541]: time="2025-07-07T06:05:03.627079519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-pr8nc,Uid:7bb09144-47e0-49e9-802e-bdaaa9d250da,Namespace:kube-system,Attempt:1,} returns sandbox id \"45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48\"" Jul 7 06:05:03.628692 kubelet[2634]: E0707 06:05:03.628634 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:03.635667 containerd[1541]: time="2025-07-07T06:05:03.635628188Z" level=info msg="CreateContainer within sandbox \"45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:05:03.637774 containerd[1541]: time="2025-07-07T06:05:03.636479619Z" level=info msg="StartContainer for \"49b3b4452819e03d3ce7b19f90a5e6076a91bfb64cc1787b6be7f885d39bdea1\" returns successfully" Jul 7 06:05:03.654251 containerd[1541]: time="2025-07-07T06:05:03.654097321Z" level=info msg="CreateContainer within sandbox \"45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0ad6ab1b1fb3556cb6b0fc6290a3dcaa23ff567850cb6eec811effc90addbce4\"" Jul 7 06:05:03.655927 containerd[1541]: time="2025-07-07T06:05:03.655883189Z" level=info msg="StartContainer for \"0ad6ab1b1fb3556cb6b0fc6290a3dcaa23ff567850cb6eec811effc90addbce4\"" Jul 7 06:05:03.743379 containerd[1541]: time="2025-07-07T06:05:03.743250519Z" level=info msg="StartContainer for \"0ad6ab1b1fb3556cb6b0fc6290a3dcaa23ff567850cb6eec811effc90addbce4\" returns successfully" Jul 7 06:05:04.133586 systemd-networkd[1228]: cali66734f2ca06: Gained IPv6LL Jul 7 06:05:04.214109 containerd[1541]: time="2025-07-07T06:05:04.214058620Z" level=info msg="StopPodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\"" Jul 7 06:05:04.215164 containerd[1541]: time="2025-07-07T06:05:04.214076542Z" level=info msg="StopPodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\"" Jul 7 06:05:04.261360 systemd-networkd[1228]: calif32de6edb66: Gained IPv6LL Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.275 [INFO][5033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.275 [INFO][5033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" iface="eth0" netns="/var/run/netns/cni-3f750ea3-dd2f-3a2c-8897-759ebc48c463" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.275 [INFO][5033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" iface="eth0" netns="/var/run/netns/cni-3f750ea3-dd2f-3a2c-8897-759ebc48c463" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.275 [INFO][5033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" iface="eth0" netns="/var/run/netns/cni-3f750ea3-dd2f-3a2c-8897-759ebc48c463" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.275 [INFO][5033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.275 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.313 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.313 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.313 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.326 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.326 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.331 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:04.337997 containerd[1541]: 2025-07-07 06:05:04.333 [INFO][5033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Jul 7 06:05:04.341697 systemd[1]: run-netns-cni\x2d3f750ea3\x2ddd2f\x2d3a2c\x2d8897\x2d759ebc48c463.mount: Deactivated successfully. Jul 7 06:05:04.342324 containerd[1541]: time="2025-07-07T06:05:04.341193000Z" level=info msg="TearDown network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" successfully" Jul 7 06:05:04.343465 containerd[1541]: time="2025-07-07T06:05:04.341775367Z" level=info msg="StopPodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" returns successfully" Jul 7 06:05:04.344315 containerd[1541]: time="2025-07-07T06:05:04.344264648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qv4x5,Uid:3711a9de-1f07-4eac-8a8a-7a0a57ec740f,Namespace:calico-system,Attempt:1,}" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.282 [INFO][5034] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.282 [INFO][5034] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" iface="eth0" netns="/var/run/netns/cni-b0e0c7bf-2e3b-e07c-54d1-57c459e61281" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.286 [INFO][5034] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" iface="eth0" netns="/var/run/netns/cni-b0e0c7bf-2e3b-e07c-54d1-57c459e61281" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.286 [INFO][5034] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" iface="eth0" netns="/var/run/netns/cni-b0e0c7bf-2e3b-e07c-54d1-57c459e61281" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.286 [INFO][5034] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.286 [INFO][5034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.322 [INFO][5056] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.322 [INFO][5056] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.331 [INFO][5056] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.344 [WARNING][5056] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.344 [INFO][5056] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.345 [INFO][5056] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:04.351551 containerd[1541]: 2025-07-07 06:05:04.348 [INFO][5034] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Jul 7 06:05:04.351551 containerd[1541]: time="2025-07-07T06:05:04.351623562Z" level=info msg="TearDown network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" successfully" Jul 7 06:05:04.351551 containerd[1541]: time="2025-07-07T06:05:04.351648364Z" level=info msg="StopPodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" returns successfully" Jul 7 06:05:04.352418 containerd[1541]: time="2025-07-07T06:05:04.352301857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4crgb,Uid:9daedc13-d72f-4853-892c-86b97bad3b56,Namespace:calico-system,Attempt:1,}" Jul 7 06:05:04.354305 systemd[1]: run-netns-cni\x2db0e0c7bf\x2d2e3b\x2de07c\x2d54d1\x2d57c459e61281.mount: Deactivated successfully. Jul 7 06:05:04.380448 kubelet[2634]: E0707 06:05:04.380368 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:04.435269 kubelet[2634]: I0707 06:05:04.433611 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-pr8nc" podStartSLOduration=34.433592697 podStartE2EDuration="34.433592697s" podCreationTimestamp="2025-07-07 06:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:04.403364777 +0000 UTC m=+40.271217880" watchObservedRunningTime="2025-07-07 06:05:04.433592697 +0000 UTC m=+40.301445800" Jul 7 06:05:04.445960 kubelet[2634]: E0707 06:05:04.443999 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:04.453411 systemd-networkd[1228]: cali7950cac01f9: Gained IPv6LL Jul 7 06:05:04.498240 kubelet[2634]: I0707 06:05:04.497059 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dxs49" podStartSLOduration=34.497039337 podStartE2EDuration="34.497039337s" podCreationTimestamp="2025-07-07 06:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:04.491160262 +0000 UTC m=+40.359013365" watchObservedRunningTime="2025-07-07 06:05:04.497039337 +0000 UTC m=+40.364892440" Jul 7 06:05:04.645782 systemd-networkd[1228]: cali6553dcd7de4: Gained IPv6LL Jul 7 06:05:04.649447 systemd-networkd[1228]: calia36527f8ac4: Link UP Jul 7 06:05:04.650165 systemd-networkd[1228]: calia36527f8ac4: Gained carrier Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.520 [INFO][5065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0 goldmane-58fd7646b9- calico-system 3711a9de-1f07-4eac-8a8a-7a0a57ec740f 1019 0 2025-07-07 06:04:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-qv4x5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia36527f8ac4 [] [] }} ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" 
Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.520 [INFO][5065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.551 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" HandleID="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.551 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" HandleID="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-qv4x5", "timestamp":"2025-07-07 06:05:04.551322518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.551 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.551 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.551 [INFO][5098] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.565 [INFO][5098] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.573 [INFO][5098] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.582 [INFO][5098] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.584 [INFO][5098] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.586 [INFO][5098] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.586 [INFO][5098] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.588 [INFO][5098] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344 Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.595 [INFO][5098] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.614 [INFO][5098] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.614 [INFO][5098] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" host="localhost" Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.614 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:05:04.663331 containerd[1541]: 2025-07-07 06:05:04.614 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" HandleID="k8s-pod-network.b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.663909 containerd[1541]: 2025-07-07 06:05:04.642 [INFO][5065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3711a9de-1f07-4eac-8a8a-7a0a57ec740f", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-qv4x5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia36527f8ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:04.663909 containerd[1541]: 2025-07-07 06:05:04.642 [INFO][5065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.663909 containerd[1541]: 2025-07-07 06:05:04.642 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia36527f8ac4 ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.663909 containerd[1541]: 2025-07-07 06:05:04.649 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.663909 containerd[1541]: 2025-07-07 06:05:04.651 [INFO][5065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3711a9de-1f07-4eac-8a8a-7a0a57ec740f", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344", Pod:"goldmane-58fd7646b9-qv4x5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia36527f8ac4", MAC:"86:87:b3:1f:bf:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:04.663909 containerd[1541]: 2025-07-07 06:05:04.660 [INFO][5065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344" Namespace="calico-system" Pod="goldmane-58fd7646b9-qv4x5" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0" Jul 7 06:05:04.729491 systemd-networkd[1228]: caliedc47bd32fc: Link UP Jul 7 06:05:04.730318 systemd-networkd[1228]: caliedc47bd32fc: Gained carrier Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.552 [INFO][5080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4crgb-eth0 csi-node-driver- calico-system 9daedc13-d72f-4853-892c-86b97bad3b56 1020 0 2025-07-07 06:04:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4crgb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliedc47bd32fc [] [] }} ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.552 [INFO][5080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.615 [INFO][5109] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" 
HandleID="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Workload="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.615 [INFO][5109] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" HandleID="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Workload="localhost-k8s-csi--node--driver--4crgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2380), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4crgb", "timestamp":"2025-07-07 06:05:04.615304721 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.615 [INFO][5109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.615 [INFO][5109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.615 [INFO][5109] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.666 [INFO][5109] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.674 [INFO][5109] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.680 [INFO][5109] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.700 [INFO][5109] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.703 [INFO][5109] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.703 [INFO][5109] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.705 [INFO][5109] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151 Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.710 [INFO][5109] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.719 [INFO][5109] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" host="localhost" Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.719 [INFO][5109] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" host="localhost" Jul 7 06:05:04.750315 
containerd[1541]: 2025-07-07 06:05:04.719 [INFO][5109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:04.750315 containerd[1541]: 2025-07-07 06:05:04.719 [INFO][5109] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" HandleID="k8s-pod-network.69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Workload="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.751516 containerd[1541]: 2025-07-07 06:05:04.723 [INFO][5080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4crgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9daedc13-d72f-4853-892c-86b97bad3b56", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4crgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc47bd32fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:04.751516 containerd[1541]: 2025-07-07 06:05:04.723 [INFO][5080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.751516 containerd[1541]: 2025-07-07 06:05:04.723 [INFO][5080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedc47bd32fc ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.751516 containerd[1541]: 2025-07-07 06:05:04.730 [INFO][5080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.751516 containerd[1541]: 2025-07-07 06:05:04.730 [INFO][5080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4crgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9daedc13-d72f-4853-892c-86b97bad3b56", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151", Pod:"csi-node-driver-4crgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc47bd32fc", MAC:"be:83:bc:73:e5:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:04.751516 containerd[1541]: 2025-07-07 06:05:04.741 [INFO][5080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151" Namespace="calico-system" Pod="csi-node-driver-4crgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4crgb-eth0" Jul 7 06:05:04.763722 containerd[1541]: time="2025-07-07T06:05:04.763551205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:04.764464 containerd[1541]: time="2025-07-07T06:05:04.764418955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:04.764464 containerd[1541]: time="2025-07-07T06:05:04.764444597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:04.764601 containerd[1541]: time="2025-07-07T06:05:04.764546725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:04.772045 containerd[1541]: time="2025-07-07T06:05:04.771861155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:04.772045 containerd[1541]: time="2025-07-07T06:05:04.771964124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:04.772045 containerd[1541]: time="2025-07-07T06:05:04.771984965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:04.772201 containerd[1541]: time="2025-07-07T06:05:04.772081893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:04.797059 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:04.802738 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:04.808308 containerd[1541]: time="2025-07-07T06:05:04.808207728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4crgb,Uid:9daedc13-d72f-4853-892c-86b97bad3b56,Namespace:calico-system,Attempt:1,} returns sandbox id \"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151\"" Jul 7 06:05:04.808308 containerd[1541]: time="2025-07-07T06:05:04.808259493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:04.809436 containerd[1541]: time="2025-07-07T06:05:04.809403505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 06:05:04.810373 containerd[1541]: time="2025-07-07T06:05:04.810234652Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:04.813649 containerd[1541]: time="2025-07-07T06:05:04.812359543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:04.814035 containerd[1541]: time="2025-07-07T06:05:04.813801460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.241882866s" Jul 7 06:05:04.814081 containerd[1541]: time="2025-07-07T06:05:04.814037399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:05:04.815373 containerd[1541]: time="2025-07-07T06:05:04.815296900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:05:04.818286 containerd[1541]: time="2025-07-07T06:05:04.818251859Z" level=info msg="CreateContainer within sandbox \"9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:05:04.829648 containerd[1541]: time="2025-07-07T06:05:04.829576213Z" level=info msg="CreateContainer within sandbox \"9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b40b0c4fdce09a27d53a8e989e09e18227ca158037e50dacbaeb127040edd53d\"" Jul 7 06:05:04.831210 containerd[1541]: time="2025-07-07T06:05:04.831143699Z" level=info msg="StartContainer for \"b40b0c4fdce09a27d53a8e989e09e18227ca158037e50dacbaeb127040edd53d\"" Jul 7 06:05:04.831210 containerd[1541]: time="2025-07-07T06:05:04.831186223Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-qv4x5,Uid:3711a9de-1f07-4eac-8a8a-7a0a57ec740f,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344\"" Jul 7 06:05:04.872246 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:48760.service - OpenSSH per-connection server daemon (10.0.0.1:48760). Jul 7 06:05:04.899655 containerd[1541]: time="2025-07-07T06:05:04.898845243Z" level=info msg="StartContainer for \"b40b0c4fdce09a27d53a8e989e09e18227ca158037e50dacbaeb127040edd53d\" returns successfully" Jul 7 06:05:04.921276 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 48760 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:04.922985 sshd[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:04.932480 systemd-logind[1522]: New session 8 of user core. Jul 7 06:05:04.940130 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:05:05.065293 containerd[1541]: time="2025-07-07T06:05:05.065213654Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:05.066006 containerd[1541]: time="2025-07-07T06:05:05.065973113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 06:05:05.069461 containerd[1541]: time="2025-07-07T06:05:05.069420624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 254.08816ms" Jul 7 06:05:05.069515 containerd[1541]: time="2025-07-07T06:05:05.069472228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:05:05.070377 containerd[1541]: time="2025-07-07T06:05:05.070352417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:05:05.072236 containerd[1541]: time="2025-07-07T06:05:05.072201963Z" level=info msg="CreateContainer within sandbox \"11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:05:05.081035 containerd[1541]: time="2025-07-07T06:05:05.080991853Z" level=info msg="CreateContainer within sandbox \"11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"98e8005008c98e62035fa2b525ee1e572dda2d8953ee0a2b273a779fae16a6b1\"" Jul 7 06:05:05.081808 containerd[1541]: time="2025-07-07T06:05:05.081774675Z" level=info msg="StartContainer for \"98e8005008c98e62035fa2b525ee1e572dda2d8953ee0a2b273a779fae16a6b1\"" Jul 7 06:05:05.165420 containerd[1541]: time="2025-07-07T06:05:05.165376603Z" level=info msg="StartContainer for \"98e8005008c98e62035fa2b525ee1e572dda2d8953ee0a2b273a779fae16a6b1\" returns successfully" Jul 7 06:05:05.201473 sshd[5245]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:05.204644 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:05:05.205412 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:48760.service: Deactivated successfully. 
Jul 7 06:05:05.207740 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:05:05.208856 systemd-logind[1522]: Removed session 8. Jul 7 06:05:05.426070 kubelet[2634]: E0707 06:05:05.425957 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:05.427754 kubelet[2634]: E0707 06:05:05.427728 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:05.434408 kubelet[2634]: I0707 06:05:05.434330 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55f9545c55-sb8ps" podStartSLOduration=25.191116398 podStartE2EDuration="27.434314012s" podCreationTimestamp="2025-07-07 06:04:38 +0000 UTC" firstStartedPulling="2025-07-07 06:05:02.571535481 +0000 UTC m=+38.439388584" lastFinishedPulling="2025-07-07 06:05:04.814733095 +0000 UTC m=+40.682586198" observedRunningTime="2025-07-07 06:05:05.432410102 +0000 UTC m=+41.300263245" watchObservedRunningTime="2025-07-07 06:05:05.434314012 +0000 UTC m=+41.302167115" Jul 7 06:05:05.446534 kubelet[2634]: I0707 06:05:05.446467 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55f9545c55-rznvr" podStartSLOduration=25.048451999 podStartE2EDuration="27.446447325s" podCreationTimestamp="2025-07-07 06:04:38 +0000 UTC" firstStartedPulling="2025-07-07 06:05:02.67221344 +0000 UTC m=+38.540066543" lastFinishedPulling="2025-07-07 06:05:05.070208766 +0000 UTC m=+40.938061869" observedRunningTime="2025-07-07 06:05:05.443303638 +0000 UTC m=+41.311156741" watchObservedRunningTime="2025-07-07 06:05:05.446447325 +0000 UTC m=+41.314300428" Jul 7 06:05:06.215247 containerd[1541]: time="2025-07-07T06:05:06.214285503Z" level=info msg="StopPodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\"" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.273 [INFO][5330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.273 [INFO][5330] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" iface="eth0" netns="/var/run/netns/cni-61aa791e-3fc5-8ffa-00e0-1ed11477662c" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.273 [INFO][5330] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" iface="eth0" netns="/var/run/netns/cni-61aa791e-3fc5-8ffa-00e0-1ed11477662c" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.273 [INFO][5330] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" iface="eth0" netns="/var/run/netns/cni-61aa791e-3fc5-8ffa-00e0-1ed11477662c" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.273 [INFO][5330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.273 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.308 [INFO][5343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.308 [INFO][5343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.308 [INFO][5343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.320 [WARNING][5343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.320 [INFO][5343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.322 [INFO][5343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:06.326806 containerd[1541]: 2025-07-07 06:05:06.324 [INFO][5330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:06.327299 containerd[1541]: time="2025-07-07T06:05:06.327062217Z" level=info msg="TearDown network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" successfully" Jul 7 06:05:06.327299 containerd[1541]: time="2025-07-07T06:05:06.327095140Z" level=info msg="StopPodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" returns successfully" Jul 7 06:05:06.329232 containerd[1541]: time="2025-07-07T06:05:06.329133416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d994cf89-m95g4,Uid:35fcaab8-d9c9-4023-a99b-c15aee365e80,Namespace:calico-system,Attempt:1,}" Jul 7 06:05:06.332489 systemd[1]: run-netns-cni\x2d61aa791e\x2d3fc5\x2d8ffa\x2d00e0\x2d1ed11477662c.mount: Deactivated successfully. 
Jul 7 06:05:06.376963 systemd-networkd[1228]: caliedc47bd32fc: Gained IPv6LL Jul 7 06:05:06.388965 containerd[1541]: time="2025-07-07T06:05:06.388913553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:06.391079 containerd[1541]: time="2025-07-07T06:05:06.391043716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 06:05:06.410181 containerd[1541]: time="2025-07-07T06:05:06.410124057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.339736837s" Jul 7 06:05:06.410181 containerd[1541]: time="2025-07-07T06:05:06.410172821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 06:05:06.414240 containerd[1541]: time="2025-07-07T06:05:06.414199049Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:06.415167 containerd[1541]: time="2025-07-07T06:05:06.415136201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:05:06.418118 containerd[1541]: time="2025-07-07T06:05:06.417616591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:06.419957 containerd[1541]: time="2025-07-07T06:05:06.419233194Z" level=info msg="CreateContainer within sandbox \"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:05:06.432658 kubelet[2634]: E0707 06:05:06.432324 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:06.432658 kubelet[2634]: I0707 06:05:06.432390 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:06.446570 containerd[1541]: time="2025-07-07T06:05:06.446073129Z" level=info msg="CreateContainer within sandbox \"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"23cc59c9c7b94b28620ed0f30df7945a6ce958ac4e4088aa8edef9b8d1d6cd16\"" Jul 7 06:05:06.448131 containerd[1541]: time="2025-07-07T06:05:06.448070642Z" level=info msg="StartContainer for \"23cc59c9c7b94b28620ed0f30df7945a6ce958ac4e4088aa8edef9b8d1d6cd16\"" Jul 7 06:05:06.505127 systemd-networkd[1228]: cali98384e83d75: Link UP Jul 7 06:05:06.505969 systemd-networkd[1228]: cali98384e83d75: Gained carrier Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.386 [INFO][5350] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0 calico-kube-controllers-56d994cf89- calico-system 35fcaab8-d9c9-4023-a99b-c15aee365e80 1098 0 2025-07-07 06:04:43 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56d994cf89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-56d994cf89-m95g4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali98384e83d75 [] [] }} ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.389 [INFO][5350] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.453 [INFO][5365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" HandleID="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.457 [INFO][5365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" HandleID="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-56d994cf89-m95g4", "timestamp":"2025-07-07 06:05:06.453680432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.457 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.457 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
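Every IPAM operation in this log, the release above and the assignment below, is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": all IPAM mutations on the node are serialized so that concurrent CNI invocations cannot race on the same allocation block. A sketch of that serialization using an exclusive file lock; the lock path and the flock mechanism are assumptions for illustration, not necessarily how Calico implements it.

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    // withHostWideLock runs fn while holding an exclusive lock shared by
    // every CNI invocation on the node. The path is an assumed location.
    func withHostWideLock(path string, fn func() error) error {
        f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
        if err != nil {
            return err
        }
        defer f.Close()
        if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil { // blocks until the lock is free
            return err
        }
        defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
        return fn() // assign or release addresses while exclusive
    }

    func main() {
        _ = withHostWideLock("/tmp/ipam.lock", func() error {
            fmt.Println("holding host-wide IPAM lock")
            return nil
        })
    }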
Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.457 [INFO][5365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.470 [INFO][5365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.474 [INFO][5365] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.478 [INFO][5365] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.479 [INFO][5365] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.482 [INFO][5365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.482 [INFO][5365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.485 [INFO][5365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31 Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.490 [INFO][5365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.500 [INFO][5365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.500 [INFO][5365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" host="localhost" Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.500 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
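The assignment walk above is Calico's block-affinity scheme end to end: look up the blocks affine to host "localhost", try 192.168.88.128/26, confirm the affinity, take the next free ordinal, then write the block back to claim 192.168.88.136/26 under a fresh handle. The ordinal arithmetic is easy to check; in the sketch below the "used" map stands in for the block's allocation array, while the real claim is a compare-and-swap write of the whole block, as the "Writing block in order to claim IPs" record indicates.

    package main

    import (
        "fmt"
        "net/netip"
    )

    // nextFreeIP returns the first unused ordinal in an affine IPv4 block,
    // the way the walk above lands on 192.168.88.136, ordinal 8 of
    // 192.168.88.128/26.
    func nextFreeIP(block netip.Prefix, used map[int]bool) (netip.Addr, bool) {
        size := 1 << (32 - block.Bits()) // 64 addresses in a /26
        addr := block.Addr()
        for ord := 0; ord < size; ord++ {
            if !used[ord] {
                used[ord] = true // claimed; persist by writing the block back
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false // block exhausted: try another affine block
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26")
        used := map[int]bool{}
        for ord := 0; ord < 8; ord++ {
            used[ord] = true // assuming ordinals 0-7 are already taken on this host
        }
        ip, _ := nextFreeIP(block, used)
        fmt.Println(ip) // 192.168.88.136, matching the claim in the log
    }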
Jul 7 06:05:06.524380 containerd[1541]: 2025-07-07 06:05:06.500 [INFO][5365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" HandleID="k8s-pod-network.72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.525080 containerd[1541]: 2025-07-07 06:05:06.502 [INFO][5350] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0", GenerateName:"calico-kube-controllers-56d994cf89-", Namespace:"calico-system", SelfLink:"", UID:"35fcaab8-d9c9-4023-a99b-c15aee365e80", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56d994cf89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-56d994cf89-m95g4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98384e83d75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:06.525080 containerd[1541]: 2025-07-07 06:05:06.502 [INFO][5350] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.525080 containerd[1541]: 2025-07-07 06:05:06.502 [INFO][5350] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98384e83d75 ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.525080 containerd[1541]: 2025-07-07 06:05:06.505 [INFO][5350] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.525080 containerd[1541]: 2025-07-07 06:05:06.506 [INFO][5350] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0", GenerateName:"calico-kube-controllers-56d994cf89-", Namespace:"calico-system", SelfLink:"", UID:"35fcaab8-d9c9-4023-a99b-c15aee365e80", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56d994cf89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31", Pod:"calico-kube-controllers-56d994cf89-m95g4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98384e83d75", MAC:"1e:0c:67:7a:fd:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:06.525080 containerd[1541]: 2025-07-07 06:05:06.519 [INFO][5350] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31" Namespace="calico-system" Pod="calico-kube-controllers-56d994cf89-m95g4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:06.543687 containerd[1541]: time="2025-07-07T06:05:06.543107038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:06.543687 containerd[1541]: time="2025-07-07T06:05:06.543164283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:06.543687 containerd[1541]: time="2025-07-07T06:05:06.543180084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:06.543687 containerd[1541]: time="2025-07-07T06:05:06.543264210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:06.565407 containerd[1541]: time="2025-07-07T06:05:06.565272655Z" level=info msg="StartContainer for \"23cc59c9c7b94b28620ed0f30df7945a6ce958ac4e4088aa8edef9b8d1d6cd16\" returns successfully" Jul 7 06:05:06.573039 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:05:06.602481 containerd[1541]: time="2025-07-07T06:05:06.602444221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56d994cf89-m95g4,Uid:35fcaab8-d9c9-4023-a99b-c15aee365e80,Namespace:calico-system,Attempt:1,} returns sandbox id \"72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31\"" Jul 7 06:05:06.629036 systemd-networkd[1228]: calia36527f8ac4: Gained IPv6LL Jul 7 06:05:07.839360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308689203.mount: Deactivated successfully. Jul 7 06:05:08.175236 containerd[1541]: time="2025-07-07T06:05:08.175133685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.176214 containerd[1541]: time="2025-07-07T06:05:08.176052112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 06:05:08.176910 containerd[1541]: time="2025-07-07T06:05:08.176851530Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.179265 containerd[1541]: time="2025-07-07T06:05:08.179211102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:08.180351 containerd[1541]: time="2025-07-07T06:05:08.180309182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.7651415s" Jul 7 06:05:08.180351 containerd[1541]: time="2025-07-07T06:05:08.180346745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 06:05:08.182950 containerd[1541]: time="2025-07-07T06:05:08.181752688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:05:08.184564 containerd[1541]: time="2025-07-07T06:05:08.184156623Z" level=info msg="CreateContainer within sandbox \"b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:05:08.201695 containerd[1541]: time="2025-07-07T06:05:08.201655979Z" level=info msg="CreateContainer within sandbox \"b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cd6f306c49e86d93e40ecea48a29d48159f05c25891b0ebf9071914f9ab20c41\"" Jul 7 06:05:08.203940 containerd[1541]: time="2025-07-07T06:05:08.202489960Z" level=info msg="StartContainer for \"cd6f306c49e86d93e40ecea48a29d48159f05c25891b0ebf9071914f9ab20c41\"" Jul 7 
06:05:08.267630 containerd[1541]: time="2025-07-07T06:05:08.264442438Z" level=info msg="StartContainer for \"cd6f306c49e86d93e40ecea48a29d48159f05c25891b0ebf9071914f9ab20c41\" returns successfully" Jul 7 06:05:08.549937 systemd-networkd[1228]: cali98384e83d75: Gained IPv6LL Jul 7 06:05:09.361905 containerd[1541]: time="2025-07-07T06:05:09.361847431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:09.363405 containerd[1541]: time="2025-07-07T06:05:09.363252211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 06:05:09.364535 containerd[1541]: time="2025-07-07T06:05:09.364318287Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:09.369912 containerd[1541]: time="2025-07-07T06:05:09.369869043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:09.370587 containerd[1541]: time="2025-07-07T06:05:09.370554532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.188772402s" Jul 7 06:05:09.370651 containerd[1541]: time="2025-07-07T06:05:09.370588654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 06:05:09.372443 containerd[1541]: time="2025-07-07T06:05:09.372397703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:05:09.375236 containerd[1541]: time="2025-07-07T06:05:09.375141819Z" level=info msg="CreateContainer within sandbox \"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:05:09.399903 containerd[1541]: time="2025-07-07T06:05:09.399765534Z" level=info msg="CreateContainer within sandbox \"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dfcef68c7b7bce557cfe9109d4faf204e74ce97fd9fd69b6ef36c343277eff65\"" Jul 7 06:05:09.401213 containerd[1541]: time="2025-07-07T06:05:09.401179915Z" level=info msg="StartContainer for \"dfcef68c7b7bce557cfe9109d4faf204e74ce97fd9fd69b6ef36c343277eff65\"" Jul 7 06:05:09.475861 containerd[1541]: time="2025-07-07T06:05:09.475815074Z" level=info msg="StartContainer for \"dfcef68c7b7bce557cfe9109d4faf204e74ce97fd9fd69b6ef36c343277eff65\" returns successfully" Jul 7 06:05:10.213139 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:48776.service - OpenSSH per-connection server daemon (10.0.0.1:48776). 
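Each pull above ends with a "Pulled image ... in <duration>" record; the figure is containerd's wall-clock time from the PullImage request to completion, and it can be reproduced from the records' own timestamps. A sketch using the node-driver-registrar pull, with timestamps copied from the log; the small residual is the gap between the internal measurement and the moment the log line was written.

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // Recomputes the "in 1.188772402s" figure for the node-driver-registrar
    // pull from the PullImage and Pulled-image record timestamps above.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse(time.RFC3339Nano, s)
            if err != nil {
                log.Fatal(err)
            }
            return t
        }
        start := parse("2025-07-07T06:05:08.181752688Z") // PullImage request logged
        done := parse("2025-07-07T06:05:09.370554532Z")  // Pulled image ... returns
        fmt.Println(done.Sub(start))                     // 1.188801844s, ~29us after the logged 1.188772402s
    }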
Jul 7 06:05:10.259871 sshd[5604]: Accepted publickey for core from 10.0.0.1 port 48776 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:10.261318 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:10.265099 systemd-logind[1522]: New session 9 of user core. Jul 7 06:05:10.274389 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:05:10.295378 kubelet[2634]: I0707 06:05:10.295336 2634 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:05:10.299840 kubelet[2634]: I0707 06:05:10.299818 2634 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:05:10.504362 kubelet[2634]: I0707 06:05:10.503923 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4crgb" podStartSLOduration=22.941380453 podStartE2EDuration="27.503824168s" podCreationTimestamp="2025-07-07 06:04:43 +0000 UTC" firstStartedPulling="2025-07-07 06:05:04.809818538 +0000 UTC m=+40.677671641" lastFinishedPulling="2025-07-07 06:05:09.372262253 +0000 UTC m=+45.240115356" observedRunningTime="2025-07-07 06:05:10.503793166 +0000 UTC m=+46.371646269" watchObservedRunningTime="2025-07-07 06:05:10.503824168 +0000 UTC m=+46.371677271" Jul 7 06:05:10.505746 kubelet[2634]: I0707 06:05:10.505535 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-qv4x5" podStartSLOduration=24.156687318 podStartE2EDuration="27.505522927s" podCreationTimestamp="2025-07-07 06:04:43 +0000 UTC" firstStartedPulling="2025-07-07 06:05:04.832481727 +0000 UTC m=+40.700334830" lastFinishedPulling="2025-07-07 06:05:08.181317336 +0000 UTC m=+44.049170439" observedRunningTime="2025-07-07 06:05:08.467776706 +0000 UTC m=+44.335629809" watchObservedRunningTime="2025-07-07 06:05:10.505522927 +0000 UTC m=+46.373376030" Jul 7 06:05:10.531835 systemd[1]: run-containerd-runc-k8s.io-cd6f306c49e86d93e40ecea48a29d48159f05c25891b0ebf9071914f9ab20c41-runc.JwbgBq.mount: Deactivated successfully. Jul 7 06:05:10.535835 sshd[5604]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:10.545801 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:48776.service: Deactivated successfully. Jul 7 06:05:10.553570 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:05:10.555000 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:05:10.557454 systemd-logind[1522]: Removed session 9. 
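The pod_startup_latency_tracker records above expose kubelet's accounting: podStartE2EDuration is watchObservedRunningTime minus the pod creation timestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so pull time is not charged against the startup SLO. The csi-node-driver-4crgb numbers reproduce exactly:

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // Reproduces kubelet's startup-latency arithmetic for csi-node-driver-4crgb
    // from the timestamps in the record above.
    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                log.Fatal(err)
            }
            return t
        }
        created := parse("2025-07-07 06:04:43 +0000 UTC")
        firstPull := parse("2025-07-07 06:05:04.809818538 +0000 UTC")
        lastPull := parse("2025-07-07 06:05:09.372262253 +0000 UTC")
        watched := parse("2025-07-07 06:05:10.503824168 +0000 UTC")

        e2e := watched.Sub(created)     // 27.503824168s = podStartE2EDuration
        pull := lastPull.Sub(firstPull) // 4.562443715s spent pulling images
        fmt.Println(e2e, e2e-pull)      // 27.503824168s 22.941380453s = podStartSLOduration
    }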
Jul 7 06:05:11.251925 containerd[1541]: time="2025-07-07T06:05:11.251824802Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:11.252689 containerd[1541]: time="2025-07-07T06:05:11.252664499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 06:05:11.253846 containerd[1541]: time="2025-07-07T06:05:11.253598203Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:11.256009 containerd[1541]: time="2025-07-07T06:05:11.255971965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:11.256904 containerd[1541]: time="2025-07-07T06:05:11.256824623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.884388877s" Jul 7 06:05:11.256904 containerd[1541]: time="2025-07-07T06:05:11.256857825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 06:05:11.270098 containerd[1541]: time="2025-07-07T06:05:11.270058647Z" level=info msg="CreateContainer within sandbox \"72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:05:11.281213 containerd[1541]: time="2025-07-07T06:05:11.281156124Z" level=info msg="CreateContainer within sandbox \"72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"301e7db22dc91913e57abd797dfd0ca56cbf448783eb5b509ab2d01d242fdcf2\"" Jul 7 06:05:11.282866 containerd[1541]: time="2025-07-07T06:05:11.282833839Z" level=info msg="StartContainer for \"301e7db22dc91913e57abd797dfd0ca56cbf448783eb5b509ab2d01d242fdcf2\"" Jul 7 06:05:11.369919 containerd[1541]: time="2025-07-07T06:05:11.368100381Z" level=info msg="StartContainer for \"301e7db22dc91913e57abd797dfd0ca56cbf448783eb5b509ab2d01d242fdcf2\" returns successfully" Jul 7 06:05:11.510552 kubelet[2634]: I0707 06:05:11.510491 2634 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-56d994cf89-m95g4" podStartSLOduration=23.856545261 podStartE2EDuration="28.510473302s" podCreationTimestamp="2025-07-07 06:04:43 +0000 UTC" firstStartedPulling="2025-07-07 06:05:06.603583469 +0000 UTC m=+42.471436572" lastFinishedPulling="2025-07-07 06:05:11.25751151 +0000 UTC m=+47.125364613" observedRunningTime="2025-07-07 06:05:11.510201683 +0000 UTC m=+47.378054786" watchObservedRunningTime="2025-07-07 06:05:11.510473302 +0000 UTC m=+47.378326365" Jul 7 06:05:15.539128 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:56604.service - OpenSSH per-connection server daemon (10.0.0.1:56604). 
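The kube-controllers records above walk containerd's standard lifecycle: PullImage, CreateContainer within the pod's sandbox, then StartContainer. Those records come from the CRI server inside containerd; a rough stand-alone equivalent using containerd's public Go client (1.x module paths) is sketched below, with the container ID and snapshot name chosen for the example.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // PullImage: fetch and unpack the image.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.2", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer: metadata, snapshot, and an OCI spec from the image config.
        ctr, err := client.NewContainer(ctx, "calico-kube-controllers-demo",
            containerd.WithImage(img),
            containerd.WithNewSnapshot("calico-kube-controllers-demo-snap", img),
            containerd.WithNewSpec(oci.WithImageConfig(img)))
        if err != nil {
            log.Fatal(err)
        }
        defer ctr.Delete(ctx, containerd.WithSnapshotCleanup)

        // StartContainer: create the runc task and start it.
        task, err := ctr.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)
        if err := task.Start(ctx); err != nil { // "StartContainer ... returns successfully"
            log.Fatal(err)
        }
        log.Println("started:", ctr.ID())
    }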
Jul 7 06:05:15.583114 sshd[5744]: Accepted publickey for core from 10.0.0.1 port 56604 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:15.584695 sshd[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:15.589853 systemd-logind[1522]: New session 10 of user core. Jul 7 06:05:15.599119 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:05:15.833362 sshd[5744]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:15.847134 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:56614.service - OpenSSH per-connection server daemon (10.0.0.1:56614). Jul 7 06:05:15.847522 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:56604.service: Deactivated successfully. Jul 7 06:05:15.850289 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:05:15.853253 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:05:15.854660 systemd-logind[1522]: Removed session 10. Jul 7 06:05:15.881575 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 56614 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:15.882831 sshd[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:15.886783 systemd-logind[1522]: New session 11 of user core. Jul 7 06:05:15.899212 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:05:16.146378 sshd[5757]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:16.156163 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:56624.service - OpenSSH per-connection server daemon (10.0.0.1:56624). Jul 7 06:05:16.157387 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:56614.service: Deactivated successfully. Jul 7 06:05:16.167840 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:05:16.167878 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:05:16.170916 systemd-logind[1522]: Removed session 11. Jul 7 06:05:16.206002 sshd[5770]: Accepted publickey for core from 10.0.0.1 port 56624 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:16.207382 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:16.211962 systemd-logind[1522]: New session 12 of user core. Jul 7 06:05:16.217173 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:05:16.359150 sshd[5770]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:16.362430 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:56624.service: Deactivated successfully. Jul 7 06:05:16.364648 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:05:16.364703 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:05:16.365902 systemd-logind[1522]: Removed session 12. Jul 7 06:05:19.327826 kubelet[2634]: I0707 06:05:19.327470 2634 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:21.369140 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:56628.service - OpenSSH per-connection server daemon (10.0.0.1:56628). Jul 7 06:05:21.404778 sshd[5797]: Accepted publickey for core from 10.0.0.1 port 56628 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:21.404765 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:21.411763 systemd-logind[1522]: New session 13 of user core. Jul 7 06:05:21.420223 systemd[1]: Started session-13.scope - Session 13 of User core. 
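The SSH traffic above traces each connection through four components: sshd accepts the public key, pam_unix opens the session, systemd-logind assigns session N, and systemd runs it as session-N.scope until logout. Pairing logind's "New session" and "Removed session" records gives per-session lifetimes; fed the records above, the sketch below would report roughly 0.26s, 0.28s, and 0.15s for sessions 10, 11, and 12. It reads a journal dump from stdin, one record per line; the timestamp prefix format is an assumption about this log's layout.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func main() {
        tsRe := regexp.MustCompile(`^([A-Z][a-z]{2}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) `)
        newRe := regexp.MustCompile(`New session (\d+) of user`)
        remRe := regexp.MustCompile(`Removed session (\d+)\.`)
        opened := map[string]time.Time{} // session number -> open time

        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            m := tsRe.FindStringSubmatch(line)
            if m == nil {
                continue // not a journal record we can timestamp
            }
            ts, err := time.Parse("Jan _2 15:04:05.999999", m[1])
            if err != nil {
                continue
            }
            if n := newRe.FindStringSubmatch(line); n != nil {
                opened[n[1]] = ts
            } else if r := remRe.FindStringSubmatch(line); r != nil {
                if start, ok := opened[r[1]]; ok {
                    fmt.Printf("session %s lived %s\n", r[1], ts.Sub(start))
                }
            }
        }
    }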
Jul 7 06:05:21.550079 sshd[5797]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:21.553950 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:56628.service: Deactivated successfully. Jul 7 06:05:21.556262 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:05:21.556584 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:05:21.557708 systemd-logind[1522]: Removed session 13. Jul 7 06:05:24.203485 containerd[1541]: time="2025-07-07T06:05:24.203324522Z" level=info msg="StopPodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\"" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.249 [WARNING][5828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0", GenerateName:"calico-kube-controllers-56d994cf89-", Namespace:"calico-system", SelfLink:"", UID:"35fcaab8-d9c9-4023-a99b-c15aee365e80", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56d994cf89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31", Pod:"calico-kube-controllers-56d994cf89-m95g4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98384e83d75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.250 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.250 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" iface="eth0" netns="" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.250 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.250 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.271 [INFO][5838] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.271 [INFO][5838] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.271 [INFO][5838] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.279 [WARNING][5838] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.279 [INFO][5838] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.281 [INFO][5838] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:24.284773 containerd[1541]: 2025-07-07 06:05:24.283 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.285205 containerd[1541]: time="2025-07-07T06:05:24.284814320Z" level=info msg="TearDown network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" successfully" Jul 7 06:05:24.285205 containerd[1541]: time="2025-07-07T06:05:24.284838881Z" level=info msg="StopPodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" returns successfully" Jul 7 06:05:24.285346 containerd[1541]: time="2025-07-07T06:05:24.285268025Z" level=info msg="RemovePodSandbox for \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\"" Jul 7 06:05:24.287619 containerd[1541]: time="2025-07-07T06:05:24.287582154Z" level=info msg="Forcibly stopping sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\"" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.318 [WARNING][5855] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0", GenerateName:"calico-kube-controllers-56d994cf89-", Namespace:"calico-system", SelfLink:"", UID:"35fcaab8-d9c9-4023-a99b-c15aee365e80", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56d994cf89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72e05c8b6e6b2399f39d366c75f3d9eaf65c167f7be8af552804a3b314680d31", Pod:"calico-kube-controllers-56d994cf89-m95g4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali98384e83d75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.318 [INFO][5855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.318 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" iface="eth0" netns="" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.318 [INFO][5855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.318 [INFO][5855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.335 [INFO][5864] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.335 [INFO][5864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.335 [INFO][5864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.343 [WARNING][5864] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.343 [INFO][5864] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" HandleID="k8s-pod-network.4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Workload="localhost-k8s-calico--kube--controllers--56d994cf89--m95g4-eth0" Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.344 [INFO][5864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:24.347391 containerd[1541]: 2025-07-07 06:05:24.345 [INFO][5855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c" Jul 7 06:05:24.347391 containerd[1541]: time="2025-07-07T06:05:24.347348017Z" level=info msg="TearDown network for sandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" successfully" Jul 7 06:05:24.362751 containerd[1541]: time="2025-07-07T06:05:24.351663058Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:05:24.362874 containerd[1541]: time="2025-07-07T06:05:24.362791281Z" level=info msg="RemovePodSandbox \"4c3919200182f6ad5d4098e0d01a25ab4c7c096f78a8a79fed70a7943930393c\" returns successfully" Jul 7 06:05:24.363406 containerd[1541]: time="2025-07-07T06:05:24.363376873Z" level=info msg="StopPodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\"" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.394 [WARNING][5881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bc4bd25-cee7-4295-98da-f48b4661ae1c", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be", Pod:"calico-apiserver-55f9545c55-rznvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66734f2ca06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.395 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.395 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" iface="eth0" netns="" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.395 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.395 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.412 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.412 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.412 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.420 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.420 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.421 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:24.424476 containerd[1541]: 2025-07-07 06:05:24.422 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.424992 containerd[1541]: time="2025-07-07T06:05:24.424520693Z" level=info msg="TearDown network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" successfully" Jul 7 06:05:24.424992 containerd[1541]: time="2025-07-07T06:05:24.424544574Z" level=info msg="StopPodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" returns successfully" Jul 7 06:05:24.425790 containerd[1541]: time="2025-07-07T06:05:24.425510388Z" level=info msg="RemovePodSandbox for \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\"" Jul 7 06:05:24.425790 containerd[1541]: time="2025-07-07T06:05:24.425542590Z" level=info msg="Forcibly stopping sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\"" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.458 [WARNING][5909] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bc4bd25-cee7-4295-98da-f48b4661ae1c", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11835eecd6ecc34027ea5a828c0b98744ca31dd743227a1cd41701e289ce98be", Pod:"calico-apiserver-55f9545c55-rznvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66734f2ca06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.458 [INFO][5909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.458 [INFO][5909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" iface="eth0" netns="" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.458 [INFO][5909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.458 [INFO][5909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.475 [INFO][5917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.475 [INFO][5917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.475 [INFO][5917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.484 [WARNING][5917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.484 [INFO][5917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" HandleID="k8s-pod-network.4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Workload="localhost-k8s-calico--apiserver--55f9545c55--rznvr-eth0" Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.486 [INFO][5917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:24.489396 containerd[1541]: 2025-07-07 06:05:24.487 [INFO][5909] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966" Jul 7 06:05:24.489396 containerd[1541]: time="2025-07-07T06:05:24.489364559Z" level=info msg="TearDown network for sandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" successfully" Jul 7 06:05:24.499370 containerd[1541]: time="2025-07-07T06:05:24.499333797Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:05:24.499427 containerd[1541]: time="2025-07-07T06:05:24.499406281Z" level=info msg="RemovePodSandbox \"4617debcb0a1eee31bae4bc697a78e3b3b1983e699909c6c8058fef902a84966\" returns successfully" Jul 7 06:05:24.499834 containerd[1541]: time="2025-07-07T06:05:24.499806743Z" level=info msg="StopPodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\"" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.533 [WARNING][5935] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be", Pod:"calico-apiserver-55f9545c55-sb8ps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32de6edb66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.534 [INFO][5935] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.534 [INFO][5935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" iface="eth0" netns="" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.534 [INFO][5935] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.534 [INFO][5935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.565 [INFO][5944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.566 [INFO][5944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.566 [INFO][5944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.576 [WARNING][5944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.577 [INFO][5944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.578 [INFO][5944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:24.584178 containerd[1541]: 2025-07-07 06:05:24.580 [INFO][5935] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.584178 containerd[1541]: time="2025-07-07T06:05:24.584020293Z" level=info msg="TearDown network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" successfully" Jul 7 06:05:24.584178 containerd[1541]: time="2025-07-07T06:05:24.584042494Z" level=info msg="StopPodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" returns successfully" Jul 7 06:05:24.585085 containerd[1541]: time="2025-07-07T06:05:24.584422476Z" level=info msg="RemovePodSandbox for \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\"" Jul 7 06:05:24.585085 containerd[1541]: time="2025-07-07T06:05:24.584450837Z" level=info msg="Forcibly stopping sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\"" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.615 [WARNING][5963] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0", GenerateName:"calico-apiserver-55f9545c55-", Namespace:"calico-apiserver", SelfLink:"", UID:"f43cdfb4-d450-43db-b66c-b3c84cc5b1e9", ResourceVersion:"1229", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55f9545c55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d5b584dd84b1f3a3330dcbe3de52911aa90f348515c9a95b281d7aed564c1be", Pod:"calico-apiserver-55f9545c55-sb8ps", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif32de6edb66", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.615 [INFO][5963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.615 [INFO][5963] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" iface="eth0" netns="" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.615 [INFO][5963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.615 [INFO][5963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.634 [INFO][5972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.634 [INFO][5972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.634 [INFO][5972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.643 [WARNING][5972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.643 [INFO][5972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" HandleID="k8s-pod-network.246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Workload="localhost-k8s-calico--apiserver--55f9545c55--sb8ps-eth0" Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.644 [INFO][5972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:05:24.648095 containerd[1541]: 2025-07-07 06:05:24.646 [INFO][5963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0" Jul 7 06:05:24.648540 containerd[1541]: time="2025-07-07T06:05:24.648141039Z" level=info msg="TearDown network for sandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" successfully" Jul 7 06:05:24.655436 containerd[1541]: time="2025-07-07T06:05:24.655397885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:05:24.655703 containerd[1541]: time="2025-07-07T06:05:24.655609137Z" level=info msg="RemovePodSandbox \"246608de5768f1610ec60d3d45caf6272e2a60d7f5aef640460b88d857fbb8f0\" returns successfully" Jul 7 06:05:24.656271 containerd[1541]: time="2025-07-07T06:05:24.656008159Z" level=info msg="StopPodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\"" Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.689 [WARNING][5991] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c1202884-ce77-4d20-a14f-e7b1c46573d9", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac", Pod:"coredns-7c65d6cfc9-dxs49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7950cac01f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.690 [INFO][5991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.690 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" iface="eth0" netns="" Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.690 [INFO][5991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.690 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.708 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0" Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.708 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.708 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.719 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0"
Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.719 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0"
Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.722 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:24.730580 containerd[1541]: 2025-07-07 06:05:24.727 [INFO][5991] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954"
Jul 7 06:05:24.731258 containerd[1541]: time="2025-07-07T06:05:24.730981272Z" level=info msg="TearDown network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" successfully"
Jul 7 06:05:24.731258 containerd[1541]: time="2025-07-07T06:05:24.731009314Z" level=info msg="StopPodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" returns successfully"
Jul 7 06:05:24.731950 containerd[1541]: time="2025-07-07T06:05:24.731449378Z" level=info msg="RemovePodSandbox for \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\""
Jul 7 06:05:24.732119 containerd[1541]: time="2025-07-07T06:05:24.732094215Z" level=info msg="Forcibly stopping sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\""
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.798 [WARNING][6017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c1202884-ce77-4d20-a14f-e7b1c46573d9", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d060eabaf495521f5854d9ee02cd8aeb5b4b8431c9b092a0531117af0048dac", Pod:"coredns-7c65d6cfc9-dxs49", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7950cac01f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.799 [INFO][6017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954"
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.799 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" iface="eth0" netns=""
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.799 [INFO][6017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954"
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.799 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954"
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.817 [INFO][6025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0"
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.817 [INFO][6025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.817 [INFO][6025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.825 [WARNING][6025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0"
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.825 [INFO][6025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" HandleID="k8s-pod-network.133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954" Workload="localhost-k8s-coredns--7c65d6cfc9--dxs49-eth0"
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.826 [INFO][6025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:24.830516 containerd[1541]: 2025-07-07 06:05:24.828 [INFO][6017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954"
Jul 7 06:05:24.831650 containerd[1541]: time="2025-07-07T06:05:24.830969344Z" level=info msg="TearDown network for sandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" successfully"
Jul 7 06:05:24.834027 containerd[1541]: time="2025-07-07T06:05:24.833995714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:05:24.834098 containerd[1541]: time="2025-07-07T06:05:24.834079398Z" level=info msg="RemovePodSandbox \"133a670e0a27525b5be34f608af2827039cb18dd187a10fdcc94c4a2165f9954\" returns successfully"
Jul 7 06:05:24.834642 containerd[1541]: time="2025-07-07T06:05:24.834580826Z" level=info msg="StopPodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\""
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.865 [WARNING][6043] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4crgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9daedc13-d72f-4853-892c-86b97bad3b56", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151", Pod:"csi-node-driver-4crgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc47bd32fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.865 [INFO][6043] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.865 [INFO][6043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" iface="eth0" netns=""
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.865 [INFO][6043] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.865 [INFO][6043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.885 [INFO][6051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0"
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.885 [INFO][6051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.885 [INFO][6051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.893 [WARNING][6051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0"
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.893 [INFO][6051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0"
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.894 [INFO][6051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:24.897984 containerd[1541]: 2025-07-07 06:05:24.896 [INFO][6043] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.898760 containerd[1541]: time="2025-07-07T06:05:24.898012814Z" level=info msg="TearDown network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" successfully"
Jul 7 06:05:24.898760 containerd[1541]: time="2025-07-07T06:05:24.898037695Z" level=info msg="StopPodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" returns successfully"
Jul 7 06:05:24.898760 containerd[1541]: time="2025-07-07T06:05:24.898574885Z" level=info msg="RemovePodSandbox for \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\""
Jul 7 06:05:24.898760 containerd[1541]: time="2025-07-07T06:05:24.898610807Z" level=info msg="Forcibly stopping sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\""
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.930 [WARNING][6071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4crgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9daedc13-d72f-4853-892c-86b97bad3b56", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69845ff0c551e6a34030f7eebfee1d22397e9b5ef914204ba68170470fa11151", Pod:"csi-node-driver-4crgb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliedc47bd32fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.930 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.930 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" iface="eth0" netns=""
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.930 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.930 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.948 [INFO][6080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0"
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.948 [INFO][6080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.948 [INFO][6080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.957 [WARNING][6080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0"
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.957 [INFO][6080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" HandleID="k8s-pod-network.bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516" Workload="localhost-k8s-csi--node--driver--4crgb-eth0"
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.958 [INFO][6080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:24.961657 containerd[1541]: 2025-07-07 06:05:24.960 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516"
Jul 7 06:05:24.961657 containerd[1541]: time="2025-07-07T06:05:24.961621571Z" level=info msg="TearDown network for sandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" successfully"
Jul 7 06:05:24.964733 containerd[1541]: time="2025-07-07T06:05:24.964701063Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:05:24.964810 containerd[1541]: time="2025-07-07T06:05:24.964760027Z" level=info msg="RemovePodSandbox \"bd5c2eba15684acb11af906113aa5d17203e7fb57a872081933ce94e607f4516\" returns successfully"
Jul 7 06:05:24.965226 containerd[1541]: time="2025-07-07T06:05:24.965204932Z" level=info msg="StopPodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\""
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:24.995 [WARNING][6097] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" WorkloadEndpoint="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:24.995 [INFO][6097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:24.995 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" iface="eth0" netns=""
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:24.995 [INFO][6097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:24.995 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.013 [INFO][6106] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.013 [INFO][6106] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.013 [INFO][6106] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.021 [WARNING][6106] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.021 [INFO][6106] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.022 [INFO][6106] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:25.025806 containerd[1541]: 2025-07-07 06:05:25.024 [INFO][6097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.025806 containerd[1541]: time="2025-07-07T06:05:25.025681019Z" level=info msg="TearDown network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" successfully"
Jul 7 06:05:25.025806 containerd[1541]: time="2025-07-07T06:05:25.025705941Z" level=info msg="StopPodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" returns successfully"
Jul 7 06:05:25.026349 containerd[1541]: time="2025-07-07T06:05:25.026277572Z" level=info msg="RemovePodSandbox for \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\""
Jul 7 06:05:25.026349 containerd[1541]: time="2025-07-07T06:05:25.026311854Z" level=info msg="Forcibly stopping sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\""
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.065 [WARNING][6124] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" WorkloadEndpoint="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.065 [INFO][6124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.065 [INFO][6124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" iface="eth0" netns=""
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.065 [INFO][6124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.065 [INFO][6124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.083 [INFO][6133] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.083 [INFO][6133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.083 [INFO][6133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.092 [WARNING][6133] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.092 [INFO][6133] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" HandleID="k8s-pod-network.ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363" Workload="localhost-k8s-whisker--855d6c9f76--dt7d4-eth0"
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.093 [INFO][6133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:25.096433 containerd[1541]: 2025-07-07 06:05:25.094 [INFO][6124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363"
Jul 7 06:05:25.096433 containerd[1541]: time="2025-07-07T06:05:25.096394933Z" level=info msg="TearDown network for sandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" successfully"
Jul 7 06:05:25.100844 containerd[1541]: time="2025-07-07T06:05:25.100779015Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:05:25.100939 containerd[1541]: time="2025-07-07T06:05:25.100846979Z" level=info msg="RemovePodSandbox \"ea4f84c541601d8e7ca3829245310e58613c191afd77263e96fdd29221f33363\" returns successfully"
Jul 7 06:05:25.101349 containerd[1541]: time="2025-07-07T06:05:25.101303964Z" level=info msg="StopPodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\""
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.132 [WARNING][6151] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3711a9de-1f07-4eac-8a8a-7a0a57ec740f", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344", Pod:"goldmane-58fd7646b9-qv4x5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia36527f8ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.132 [INFO][6151] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.132 [INFO][6151] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" iface="eth0" netns=""
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.132 [INFO][6151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.132 [INFO][6151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.154 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0"
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.154 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.154 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.162 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0"
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.162 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0"
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.163 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:25.166993 containerd[1541]: 2025-07-07 06:05:25.165 [INFO][6151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.167600 containerd[1541]: time="2025-07-07T06:05:25.167031562Z" level=info msg="TearDown network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" successfully"
Jul 7 06:05:25.167600 containerd[1541]: time="2025-07-07T06:05:25.167059123Z" level=info msg="StopPodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" returns successfully"
Jul 7 06:05:25.167600 containerd[1541]: time="2025-07-07T06:05:25.167464986Z" level=info msg="RemovePodSandbox for \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\""
Jul 7 06:05:25.167600 containerd[1541]: time="2025-07-07T06:05:25.167497187Z" level=info msg="Forcibly stopping sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\""
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.200 [WARNING][6178] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3711a9de-1f07-4eac-8a8a-7a0a57ec740f", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b0d301724555424347b0f2a3ce8c6b4f3144286a5245406c359429e322750344", Pod:"goldmane-58fd7646b9-qv4x5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia36527f8ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.200 [INFO][6178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.200 [INFO][6178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" iface="eth0" netns=""
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.200 [INFO][6178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.201 [INFO][6178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.219 [INFO][6187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0"
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.219 [INFO][6187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.219 [INFO][6187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.226 [WARNING][6187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0"
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.226 [INFO][6187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" HandleID="k8s-pod-network.b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0" Workload="localhost-k8s-goldmane--58fd7646b9--qv4x5-eth0"
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.228 [INFO][6187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:25.231583 containerd[1541]: 2025-07-07 06:05:25.229 [INFO][6178] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0"
Jul 7 06:05:25.232202 containerd[1541]: time="2025-07-07T06:05:25.231610655Z" level=info msg="TearDown network for sandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" successfully"
Jul 7 06:05:25.234536 containerd[1541]: time="2025-07-07T06:05:25.234503855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:05:25.234617 containerd[1541]: time="2025-07-07T06:05:25.234586460Z" level=info msg="RemovePodSandbox \"b48801d88b13d2198670d3e64856bdb68cd9b53c954b64998334ce92106d5cd0\" returns successfully"
Jul 7 06:05:25.235034 containerd[1541]: time="2025-07-07T06:05:25.235006003Z" level=info msg="StopPodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\""
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.265 [WARNING][6204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7bb09144-47e0-49e9-802e-bdaaa9d250da", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48", Pod:"coredns-7c65d6cfc9-pr8nc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6553dcd7de4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.265 [INFO][6204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.265 [INFO][6204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" iface="eth0" netns=""
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.265 [INFO][6204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.265 [INFO][6204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.283 [INFO][6213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0"
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.283 [INFO][6213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.283 [INFO][6213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.291 [WARNING][6213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0"
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.291 [INFO][6213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0"
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.293 [INFO][6213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:25.296801 containerd[1541]: 2025-07-07 06:05:25.294 [INFO][6204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.297255 containerd[1541]: time="2025-07-07T06:05:25.296840385Z" level=info msg="TearDown network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" successfully"
Jul 7 06:05:25.297255 containerd[1541]: time="2025-07-07T06:05:25.296864426Z" level=info msg="StopPodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" returns successfully"
Jul 7 06:05:25.297923 containerd[1541]: time="2025-07-07T06:05:25.297677351Z" level=info msg="RemovePodSandbox for \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\""
Jul 7 06:05:25.297923 containerd[1541]: time="2025-07-07T06:05:25.297707793Z" level=info msg="Forcibly stopping sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\""
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.329 [WARNING][6231] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7bb09144-47e0-49e9-802e-bdaaa9d250da", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 4, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45a8cdc4a042f46fb85a4f00ba831831b8b8a6e061b9c2975ae585b51e55bc48", Pod:"coredns-7c65d6cfc9-pr8nc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6553dcd7de4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.330 [INFO][6231] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.330 [INFO][6231] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" iface="eth0" netns=""
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.330 [INFO][6231] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.330 [INFO][6231] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.349 [INFO][6240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0"
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.349 [INFO][6240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.349 [INFO][6240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.358 [WARNING][6240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0"
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.358 [INFO][6240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" HandleID="k8s-pod-network.33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1" Workload="localhost-k8s-coredns--7c65d6cfc9--pr8nc-eth0"
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.359 [INFO][6240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:05:25.363196 containerd[1541]: 2025-07-07 06:05:25.361 [INFO][6231] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1"
Jul 7 06:05:25.363196 containerd[1541]: time="2025-07-07T06:05:25.363142534Z" level=info msg="TearDown network for sandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" successfully"
Jul 7 06:05:25.366485 containerd[1541]: time="2025-07-07T06:05:25.366408795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 7 06:05:25.366567 containerd[1541]: time="2025-07-07T06:05:25.366514041Z" level=info msg="RemovePodSandbox \"33d9330fed8da483081e67afcb63a94d405e2f3902d16a5533e043bdf78489f1\" returns successfully"
Jul 7 06:05:26.566181 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:36504.service - OpenSSH per-connection server daemon (10.0.0.1:36504).
Jul 7 06:05:26.609859 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 36504 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:26.611369 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:26.615376 systemd-logind[1522]: New session 14 of user core.
Jul 7 06:05:26.624181 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 7 06:05:26.867663 sshd[6269]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:26.871287 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:36504.service: Deactivated successfully.
Jul 7 06:05:26.873741 systemd[1]: session-14.scope: Deactivated successfully.
Jul 7 06:05:26.875388 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit.
Jul 7 06:05:26.876255 systemd-logind[1522]: Removed session 14.
Jul 7 06:05:31.879201 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:36508.service - OpenSSH per-connection server daemon (10.0.0.1:36508).
Jul 7 06:05:31.922933 sshd[6313]: Accepted publickey for core from 10.0.0.1 port 36508 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:31.923944 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:31.928989 systemd-logind[1522]: New session 15 of user core.
Jul 7 06:05:31.934357 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 7 06:05:32.317575 sshd[6313]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:32.329218 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:36524.service - OpenSSH per-connection server daemon (10.0.0.1:36524).
Jul 7 06:05:32.329671 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:36508.service: Deactivated successfully.
Jul 7 06:05:32.331479 systemd[1]: session-15.scope: Deactivated successfully.
Jul 7 06:05:32.332751 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit.
Jul 7 06:05:32.333798 systemd-logind[1522]: Removed session 15.
Jul 7 06:05:32.362843 sshd[6326]: Accepted publickey for core from 10.0.0.1 port 36524 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:32.363974 sshd[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:32.368765 systemd-logind[1522]: New session 16 of user core.
Jul 7 06:05:32.373126 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 7 06:05:32.587621 sshd[6326]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:32.590496 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:56120.service - OpenSSH per-connection server daemon (10.0.0.1:56120).
Jul 7 06:05:32.594015 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:36524.service: Deactivated successfully.
Jul 7 06:05:32.597410 systemd[1]: session-16.scope: Deactivated successfully.
Jul 7 06:05:32.599138 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit.
Jul 7 06:05:32.600596 systemd-logind[1522]: Removed session 16.
Jul 7 06:05:32.637659 sshd[6353]: Accepted publickey for core from 10.0.0.1 port 56120 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:32.639048 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:32.642811 systemd-logind[1522]: New session 17 of user core.
Jul 7 06:05:32.652199 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 7 06:05:34.217066 kubelet[2634]: E0707 06:05:34.216683 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:34.482456 sshd[6353]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:34.495269 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:56122.service - OpenSSH per-connection server daemon (10.0.0.1:56122).
Jul 7 06:05:34.495653 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:56120.service: Deactivated successfully.
Jul 7 06:05:34.499157 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit.
Jul 7 06:05:34.499779 systemd[1]: session-17.scope: Deactivated successfully.
Jul 7 06:05:34.503585 systemd-logind[1522]: Removed session 17.
Jul 7 06:05:34.534731 sshd[6377]: Accepted publickey for core from 10.0.0.1 port 56122 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:34.536388 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:34.540321 systemd-logind[1522]: New session 18 of user core.
Jul 7 06:05:34.553315 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 7 06:05:35.129819 sshd[6377]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:35.146419 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:56126.service - OpenSSH per-connection server daemon (10.0.0.1:56126).
Jul 7 06:05:35.147252 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:56122.service: Deactivated successfully.
Jul 7 06:05:35.149317 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:05:35.150846 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:05:35.152178 systemd-logind[1522]: Removed session 18.
Jul 7 06:05:35.197260 sshd[6391]: Accepted publickey for core from 10.0.0.1 port 56126 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:35.199130 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:35.205092 systemd-logind[1522]: New session 19 of user core.
Jul 7 06:05:35.213143 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:05:35.374662 sshd[6391]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:35.377562 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:56126.service: Deactivated successfully.
Jul 7 06:05:35.379808 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:05:35.380088 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:05:35.384400 systemd-logind[1522]: Removed session 19.
Jul 7 06:05:35.429966 systemd-journald[1148]: Under memory pressure, flushing caches.
Jul 7 06:05:35.429048 systemd-resolved[1434]: Under memory pressure, flushing caches.
Jul 7 06:05:35.429085 systemd-resolved[1434]: Flushed all caches.
Jul 7 06:05:38.216536 kubelet[2634]: E0707 06:05:38.216485 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:40.387360 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:56130.service - OpenSSH per-connection server daemon (10.0.0.1:56130).
Jul 7 06:05:40.420857 sshd[6413]: Accepted publickey for core from 10.0.0.1 port 56130 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:40.422206 sshd[6413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:40.426538 systemd-logind[1522]: New session 20 of user core.
Jul 7 06:05:40.434213 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:05:40.551053 sshd[6413]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:40.554486 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:56130.service: Deactivated successfully.
Jul 7 06:05:40.556880 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:05:40.557351 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:05:40.558363 systemd-logind[1522]: Removed session 20.
Jul 7 06:05:44.214256 kubelet[2634]: E0707 06:05:44.214014 2634 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:05:45.569324 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:42950.service - OpenSSH per-connection server daemon (10.0.0.1:42950).
Jul 7 06:05:45.603437 sshd[6434]: Accepted publickey for core from 10.0.0.1 port 42950 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:45.604777 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:45.609999 systemd-logind[1522]: New session 21 of user core.
Jul 7 06:05:45.617280 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:05:45.736792 sshd[6434]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:45.745275 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:05:45.746831 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:42950.service: Deactivated successfully.
Jul 7 06:05:45.749143 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:05:45.750970 systemd-logind[1522]: Removed session 21.
Jul 7 06:05:50.745149 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:42956.service - OpenSSH per-connection server daemon (10.0.0.1:42956).
Jul 7 06:05:50.780429 sshd[6451]: Accepted publickey for core from 10.0.0.1 port 42956 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 06:05:50.782028 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:05:50.790655 systemd-logind[1522]: New session 22 of user core.
Jul 7 06:05:50.797173 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:05:50.943500 sshd[6451]: pam_unix(sshd:session): session closed for user core
Jul 7 06:05:50.947883 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:42956.service: Deactivated successfully.
Jul 7 06:05:50.950289 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:05:50.951176 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:05:50.952243 systemd-logind[1522]: Removed session 22.