Jul 7 06:04:57.924515 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 06:04:57.924538 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 06:04:57.924549 kernel: KASLR enabled
Jul 7 06:04:57.924562 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:04:57.924568 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 7 06:04:57.924574 kernel: random: crng init done
Jul 7 06:04:57.924581 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:04:57.924587 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 7 06:04:57.924594 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:04:57.924602 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924609 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924615 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924622 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924628 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924636 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924644 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924651 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924658 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:04:57.924665 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 7 06:04:57.924671 kernel: NUMA: Failed to initialise from firmware
Jul 7 06:04:57.924678 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:04:57.924685 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 7 06:04:57.924691 kernel: Zone ranges:
Jul 7 06:04:57.924698 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:04:57.924704 kernel: DMA32 empty
Jul 7 06:04:57.924712 kernel: Normal empty
Jul 7 06:04:57.924718 kernel: Movable zone start for each node
Jul 7 06:04:57.924724 kernel: Early memory node ranges
Jul 7 06:04:57.924733 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 7 06:04:57.924743 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 7 06:04:57.924751 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 7 06:04:57.924757 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 7 06:04:57.924764 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 7 06:04:57.924771 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 7 06:04:57.924777 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 7 06:04:57.924784 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 06:04:57.924790 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 7 06:04:57.924798 kernel: psci: probing for conduit method from ACPI.
Jul 7 06:04:57.924805 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 06:04:57.924812 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 06:04:57.924821 kernel: psci: Trusted OS migration not required
Jul 7 06:04:57.924828 kernel: psci: SMC Calling Convention v1.1
Jul 7 06:04:57.924835 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 06:04:57.924844 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 06:04:57.924851 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 06:04:57.924858 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 7 06:04:57.924865 kernel: Detected PIPT I-cache on CPU0
Jul 7 06:04:57.924872 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 06:04:57.924879 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 06:04:57.924887 kernel: CPU features: detected: Spectre-v4
Jul 7 06:04:57.924893 kernel: CPU features: detected: Spectre-BHB
Jul 7 06:04:57.924900 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 06:04:57.924907 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 06:04:57.924916 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 06:04:57.924923 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 06:04:57.924930 kernel: alternatives: applying boot alternatives
Jul 7 06:04:57.924938 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:04:57.924945 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:04:57.924952 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:04:57.924959 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:04:57.924966 kernel: Fallback order for Node 0: 0
Jul 7 06:04:57.924973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 7 06:04:57.924980 kernel: Policy zone: DMA
Jul 7 06:04:57.924987 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:04:57.924995 kernel: software IO TLB: area num 4.
Jul 7 06:04:57.925002 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 7 06:04:57.925010 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 7 06:04:57.925017 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:04:57.925024 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:04:57.925032 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:04:57.925039 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:04:57.925046 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:04:57.925053 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:04:57.925060 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:04:57.925067 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:04:57.925074 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 06:04:57.925083 kernel: GICv3: 256 SPIs implemented
Jul 7 06:04:57.925090 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 06:04:57.925097 kernel: Root IRQ handler: gic_handle_irq
Jul 7 06:04:57.925103 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 06:04:57.925110 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 06:04:57.925117 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 06:04:57.925124 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 06:04:57.925131 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 06:04:57.925138 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 7 06:04:57.925145 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 7 06:04:57.925152 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:04:57.925160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:04:57.925167 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 06:04:57.925174 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 06:04:57.925182 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 06:04:57.925189 kernel: arm-pv: using stolen time PV
Jul 7 06:04:57.925196 kernel: Console: colour dummy device 80x25
Jul 7 06:04:57.925203 kernel: ACPI: Core revision 20230628
Jul 7 06:04:57.925210 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 06:04:57.925218 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:04:57.925225 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 06:04:57.925234 kernel: landlock: Up and running.
Jul 7 06:04:57.925241 kernel: SELinux: Initializing.
Jul 7 06:04:57.925248 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:04:57.925255 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:04:57.925262 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:04:57.925269 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:04:57.925277 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:04:57.925284 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:04:57.925291 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 06:04:57.925299 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 06:04:57.925307 kernel: Remapping and enabling EFI services.
Jul 7 06:04:57.925314 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:04:57.925331 kernel: Detected PIPT I-cache on CPU1
Jul 7 06:04:57.925339 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 06:04:57.925346 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 7 06:04:57.925353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:04:57.925360 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 06:04:57.925368 kernel: Detected PIPT I-cache on CPU2
Jul 7 06:04:57.925375 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 7 06:04:57.925384 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 7 06:04:57.925391 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:04:57.925403 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 7 06:04:57.925412 kernel: Detected PIPT I-cache on CPU3
Jul 7 06:04:57.925420 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 7 06:04:57.925427 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 7 06:04:57.925435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 06:04:57.925442 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 7 06:04:57.925450 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:04:57.925459 kernel: SMP: Total of 4 processors activated.
Jul 7 06:04:57.925466 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 06:04:57.925474 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 06:04:57.925481 kernel: CPU features: detected: Common not Private translations
Jul 7 06:04:57.925489 kernel: CPU features: detected: CRC32 instructions
Jul 7 06:04:57.925496 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 06:04:57.925504 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 06:04:57.925512 kernel: CPU features: detected: LSE atomic instructions
Jul 7 06:04:57.925520 kernel: CPU features: detected: Privileged Access Never
Jul 7 06:04:57.925528 kernel: CPU features: detected: RAS Extension Support
Jul 7 06:04:57.925536 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 06:04:57.925543 kernel: CPU: All CPU(s) started at EL1
Jul 7 06:04:57.925551 kernel: alternatives: applying system-wide alternatives
Jul 7 06:04:57.925563 kernel: devtmpfs: initialized
Jul 7 06:04:57.925571 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:04:57.925579 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:04:57.925586 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:04:57.925596 kernel: SMBIOS 3.0.0 present.
Jul 7 06:04:57.925604 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 7 06:04:57.925611 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:04:57.925632 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 06:04:57.925640 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 06:04:57.925648 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 06:04:57.925656 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:04:57.925663 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 7 06:04:57.925671 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:04:57.925681 kernel: cpuidle: using governor menu
Jul 7 06:04:57.925688 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 06:04:57.925696 kernel: ASID allocator initialised with 32768 entries
Jul 7 06:04:57.925703 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:04:57.925711 kernel: Serial: AMBA PL011 UART driver
Jul 7 06:04:57.925718 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 06:04:57.925726 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 06:04:57.925733 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 06:04:57.925742 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:04:57.925751 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:04:57.925759 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 06:04:57.925767 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 06:04:57.925774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:04:57.925782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:04:57.925789 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 06:04:57.925797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 06:04:57.925804 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:04:57.925812 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:04:57.925820 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:04:57.925828 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:04:57.925835 kernel: ACPI: Interpreter enabled
Jul 7 06:04:57.925843 kernel: ACPI: Using GIC for interrupt routing
Jul 7 06:04:57.925850 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 06:04:57.925858 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 06:04:57.925865 kernel: printk: console [ttyAMA0] enabled
Jul 7 06:04:57.925873 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:04:57.926023 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:04:57.926103 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 06:04:57.926172 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 06:04:57.926247 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 06:04:57.926314 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 06:04:57.926347 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 06:04:57.926355 kernel: PCI host bridge to bus 0000:00
Jul 7 06:04:57.926439 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 06:04:57.926505 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 06:04:57.926578 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 06:04:57.926643 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:04:57.926727 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 06:04:57.926808 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 06:04:57.926880 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 7 06:04:57.926952 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 7 06:04:57.927021 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:04:57.927090 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 06:04:57.927159 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 7 06:04:57.927228 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 7 06:04:57.927291 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 06:04:57.927365 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 06:04:57.927441 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 06:04:57.927452 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 06:04:57.927460 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 06:04:57.927467 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 06:04:57.927475 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 06:04:57.927482 kernel: iommu: Default domain type: Translated
Jul 7 06:04:57.927490 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 06:04:57.927497 kernel: efivars: Registered efivars operations
Jul 7 06:04:57.927507 kernel: vgaarb: loaded
Jul 7 06:04:57.927515 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 06:04:57.927522 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:04:57.927530 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:04:57.927537 kernel: pnp: PnP ACPI init
Jul 7 06:04:57.927624 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 06:04:57.927636 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 06:04:57.927644 kernel: NET: Registered PF_INET protocol family
Jul 7 06:04:57.927655 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:04:57.927670 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:04:57.927677 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:04:57.927685 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:04:57.927693 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:04:57.927701 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:04:57.927709 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:04:57.927717 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:04:57.927724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:04:57.927734 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:04:57.927741 kernel: kvm [1]: HYP mode not available
Jul 7 06:04:57.927749 kernel: Initialise system trusted keyrings
Jul 7 06:04:57.927756 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:04:57.927764 kernel: Key type asymmetric registered
Jul 7 06:04:57.927771 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:04:57.927779 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:04:57.927786 kernel: io scheduler mq-deadline registered
Jul 7 06:04:57.927794 kernel: io scheduler kyber registered
Jul 7 06:04:57.927801 kernel: io scheduler bfq registered
Jul 7 06:04:57.927810 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 06:04:57.927818 kernel: ACPI: button: Power Button [PWRB]
Jul 7 06:04:57.927826 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 06:04:57.927900 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 7 06:04:57.927911 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:04:57.927918 kernel: thunder_xcv, ver 1.0
Jul 7 06:04:57.927926 kernel: thunder_bgx, ver 1.0
Jul 7 06:04:57.927933 kernel: nicpf, ver 1.0
Jul 7 06:04:57.927941 kernel: nicvf, ver 1.0
Jul 7 06:04:57.928024 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 06:04:57.928090 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T06:04:57 UTC (1751868297)
Jul 7 06:04:57.928100 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 06:04:57.928108 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 06:04:57.928116 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 06:04:57.928123 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 06:04:57.928131 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:04:57.928139 kernel: Segment Routing with IPv6
Jul 7 06:04:57.928148 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:04:57.928156 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:04:57.928163 kernel: Key type dns_resolver registered
Jul 7 06:04:57.928171 kernel: registered taskstats version 1
Jul 7 06:04:57.928179 kernel: Loading compiled-in X.509 certificates
Jul 7 06:04:57.928186 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 06:04:57.928194 kernel: Key type .fscrypt registered
Jul 7 06:04:57.928201 kernel: Key type fscrypt-provisioning registered
Jul 7 06:04:57.928209 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:04:57.928217 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:04:57.928225 kernel: ima: No architecture policies found
Jul 7 06:04:57.928232 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 06:04:57.928240 kernel: clk: Disabling unused clocks
Jul 7 06:04:57.928248 kernel: Freeing unused kernel memory: 39424K
Jul 7 06:04:57.928255 kernel: Run /init as init process
Jul 7 06:04:57.928263 kernel: with arguments:
Jul 7 06:04:57.928270 kernel: /init
Jul 7 06:04:57.928277 kernel: with environment:
Jul 7 06:04:57.928287 kernel: HOME=/
Jul 7 06:04:57.928294 kernel: TERM=linux
Jul 7 06:04:57.928302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:04:57.928311 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:04:57.928359 systemd[1]: Detected virtualization kvm.
Jul 7 06:04:57.928368 systemd[1]: Detected architecture arm64.
Jul 7 06:04:57.928376 systemd[1]: Running in initrd.
Jul 7 06:04:57.928386 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:04:57.928394 systemd[1]: Hostname set to .
Jul 7 06:04:57.928403 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:04:57.928411 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:04:57.928419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:04:57.928427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:04:57.928436 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:04:57.928444 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:04:57.928454 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:04:57.928462 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:04:57.928472 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:04:57.928480 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:04:57.928489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:04:57.928497 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:04:57.928505 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:04:57.928514 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:04:57.928522 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:04:57.928530 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:04:57.928539 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:04:57.928547 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:04:57.928561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:04:57.928570 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 06:04:57.928578 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:04:57.928586 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:04:57.928596 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:04:57.928605 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:04:57.928613 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:04:57.928624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:04:57.928632 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:04:57.928642 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:04:57.928651 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:04:57.928662 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:04:57.928671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:04:57.928680 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:04:57.928688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:04:57.928696 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:04:57.928705 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:04:57.928715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:04:57.928723 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:04:57.928753 systemd-journald[237]: Collecting audit messages is disabled.
Jul 7 06:04:57.928773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:04:57.928784 systemd-journald[237]: Journal started
Jul 7 06:04:57.928803 systemd-journald[237]: Runtime Journal (/run/log/journal/1b81241a373c422d9d4c1be8983763f8) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:04:57.913240 systemd-modules-load[238]: Inserted module 'overlay'
Jul 7 06:04:57.932351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:04:57.936345 kernel: Bridge firewalling registered
Jul 7 06:04:57.936197 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 7 06:04:57.944511 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:04:57.944548 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:04:57.946701 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:04:57.948899 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:04:57.950379 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:04:57.966531 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:04:57.968401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:04:57.970478 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:04:57.980447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:04:57.982142 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:04:57.984745 dracut-cmdline[269]: dracut-dracut-053
Jul 7 06:04:57.984745 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 06:04:57.992570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:04:58.020141 systemd-resolved[291]: Positive Trust Anchors:
Jul 7 06:04:58.020158 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:04:58.020191 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:04:58.027527 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jul 7 06:04:58.035748 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:04:58.036947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:04:58.060349 kernel: SCSI subsystem initialized
Jul 7 06:04:58.065332 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:04:58.076358 kernel: iscsi: registered transport (tcp)
Jul 7 06:04:58.089343 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:04:58.089366 kernel: QLogic iSCSI HBA Driver
Jul 7 06:04:58.134990 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:04:58.146490 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:04:58.163340 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:04:58.163379 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:04:58.165343 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 06:04:58.212367 kernel: raid6: neonx8 gen() 15566 MB/s
Jul 7 06:04:58.229347 kernel: raid6: neonx4 gen() 15666 MB/s
Jul 7 06:04:58.246340 kernel: raid6: neonx2 gen() 13275 MB/s
Jul 7 06:04:58.263343 kernel: raid6: neonx1 gen() 10442 MB/s
Jul 7 06:04:58.280342 kernel: raid6: int64x8 gen() 6628 MB/s
Jul 7 06:04:58.297340 kernel: raid6: int64x4 gen() 7343 MB/s
Jul 7 06:04:58.314339 kernel: raid6: int64x2 gen() 6123 MB/s
Jul 7 06:04:58.331449 kernel: raid6: int64x1 gen() 5052 MB/s
Jul 7 06:04:58.331479 kernel: raid6: using algorithm neonx4 gen() 15666 MB/s
Jul 7 06:04:58.349434 kernel: raid6: .... xor() 12297 MB/s, rmw enabled
Jul 7 06:04:58.349469 kernel: raid6: using neon recovery algorithm
Jul 7 06:04:58.354342 kernel: xor: measuring software checksum speed
Jul 7 06:04:58.355581 kernel: 8regs : 17684 MB/sec
Jul 7 06:04:58.355593 kernel: 32regs : 19664 MB/sec
Jul 7 06:04:58.356829 kernel: arm64_neon : 26998 MB/sec
Jul 7 06:04:58.356841 kernel: xor: using function: arm64_neon (26998 MB/sec)
Jul 7 06:04:58.406346 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:04:58.416926 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:04:58.424494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:04:58.435601 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Jul 7 06:04:58.438664 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:04:58.451808 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:04:58.463026 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jul 7 06:04:58.489891 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:04:58.495480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:04:58.535587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:04:58.541487 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:04:58.556358 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:04:58.558150 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:04:58.560152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:04:58.562898 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:04:58.574196 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:04:58.578232 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 7 06:04:58.578459 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:04:58.584694 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:04:58.584732 kernel: GPT:9289727 != 19775487
Jul 7 06:04:58.584749 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:04:58.585686 kernel: GPT:9289727 != 19775487
Jul 7 06:04:58.585628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:04:58.589684 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:04:58.589711 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:04:58.592088 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:04:58.592245 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:04:58.601039 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:04:58.602274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:04:58.602425 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:04:58.605082 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:04:58.611750 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:04:58.615339 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (514)
Jul 7 06:04:58.619364 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (513)
Jul 7 06:04:58.626353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:04:58.631320 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:04:58.638824 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:04:58.643513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:04:58.647492 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:04:58.648688 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:04:58.666451 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:04:58.668232 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:04:58.673952 disk-uuid[551]: Primary Header is updated.
Jul 7 06:04:58.673952 disk-uuid[551]: Secondary Entries is updated.
Jul 7 06:04:58.673952 disk-uuid[551]: Secondary Header is updated.
Jul 7 06:04:58.677349 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:04:58.689896 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:04:59.691354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:04:59.691625 disk-uuid[553]: The operation has completed successfully.
Jul 7 06:04:59.714699 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:04:59.714810 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:04:59.736482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:04:59.740077 sh[574]: Success
Jul 7 06:04:59.756337 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 06:04:59.783629 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:04:59.790570 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:04:59.792583 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:04:59.802449 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 06:04:59.802481 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:04:59.802498 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 06:04:59.803523 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 06:04:59.804916 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 06:04:59.808798 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:04:59.810082 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:04:59.818450 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:04:59.819915 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:04:59.828427 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:04:59.828469 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:04:59.828480 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:04:59.831350 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:04:59.838609 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 06:04:59.840628 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:04:59.845467 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:04:59.852520 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:04:59.920735 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:04:59.930450 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:04:59.948404 ignition[668]: Ignition 2.19.0
Jul 7 06:04:59.948424 ignition[668]: Stage: fetch-offline
Jul 7 06:04:59.948458 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:04:59.948465 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:04:59.948623 ignition[668]: parsed url from cmdline: ""
Jul 7 06:04:59.948626 ignition[668]: no config URL provided
Jul 7 06:04:59.948631 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:04:59.948638 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:04:59.954241 systemd-networkd[766]: lo: Link UP
Jul 7 06:04:59.948659 ignition[668]: op(1): [started] loading QEMU firmware config module
Jul 7 06:04:59.954244 systemd-networkd[766]: lo: Gained carrier
Jul 7 06:04:59.948672 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:04:59.954971 systemd-networkd[766]: Enumeration completed
Jul 7 06:04:59.954271 ignition[668]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:04:59.955075 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:04:59.954290 ignition[668]: QEMU firmware config was not found. Ignoring...
Jul 7 06:04:59.955560 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:04:59.955564 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:04:59.956827 systemd-networkd[766]: eth0: Link UP
Jul 7 06:04:59.956830 systemd-networkd[766]: eth0: Gained carrier
Jul 7 06:04:59.956836 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:04:59.957183 systemd[1]: Reached target network.target - Network.
Jul 7 06:04:59.994384 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:05:00.005520 ignition[668]: parsing config with SHA512: 70d9b6fe0170dbf6e2ef05fdd88be5b4e21c38040ba7291122b72544ad043e0e7bd91f069fa75dea98b8a377fa8eef39e2f3f29e420817df08d9064d441f638e
Jul 7 06:05:00.009567 unknown[668]: fetched base config from "system"
Jul 7 06:05:00.009577 unknown[668]: fetched user config from "qemu"
Jul 7 06:05:00.010124 ignition[668]: fetch-offline: fetch-offline passed
Jul 7 06:05:00.011335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:05:00.010208 ignition[668]: Ignition finished successfully
Jul 7 06:05:00.013301 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:05:00.020500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:05:00.030439 ignition[773]: Ignition 2.19.0
Jul 7 06:05:00.030449 ignition[773]: Stage: kargs
Jul 7 06:05:00.030612 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:00.030621 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:05:00.033472 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:05:00.031458 ignition[773]: kargs: kargs passed
Jul 7 06:05:00.031504 ignition[773]: Ignition finished successfully
Jul 7 06:05:00.047492 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:05:00.056858 ignition[781]: Ignition 2.19.0
Jul 7 06:05:00.056867 ignition[781]: Stage: disks
Jul 7 06:05:00.057020 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:00.057029 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:05:00.057889 ignition[781]: disks: disks passed
Jul 7 06:05:00.060240 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 06:05:00.057930 ignition[781]: Ignition finished successfully
Jul 7 06:05:00.061703 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 06:05:00.063349 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 06:05:00.065049 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:05:00.066830 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:05:00.068728 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:05:00.084455 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 06:05:00.093495 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 06:05:00.097156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 06:05:00.100774 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 06:05:00.147340 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 06:05:00.147507 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 06:05:00.148670 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:05:00.164414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:05:00.166070 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 06:05:00.167465 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 06:05:00.167503 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 06:05:00.173765 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (801)
Jul 7 06:05:00.167524 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:05:00.174532 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 06:05:00.176460 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 06:05:00.180821 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:05:00.180841 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:05:00.180852 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:05:00.182359 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:05:00.183957 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:05:00.220553 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 06:05:00.224671 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory
Jul 7 06:05:00.228633 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 06:05:00.231795 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 06:05:00.296863 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 06:05:00.310420 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 06:05:00.311896 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 06:05:00.317350 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:05:00.330282 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 06:05:00.334812 ignition[916]: INFO : Ignition 2.19.0
Jul 7 06:05:00.335726 ignition[916]: INFO : Stage: mount
Jul 7 06:05:00.335726 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:00.335726 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:05:00.338423 ignition[916]: INFO : mount: mount passed
Jul 7 06:05:00.338423 ignition[916]: INFO : Ignition finished successfully
Jul 7 06:05:00.338403 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 06:05:00.347417 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 06:05:00.801087 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 06:05:00.820520 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 06:05:00.827412 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (927)
Jul 7 06:05:00.827442 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 06:05:00.828468 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 06:05:00.829327 kernel: BTRFS info (device vda6): using free space tree
Jul 7 06:05:00.831334 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 06:05:00.832517 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 06:05:00.847944 ignition[944]: INFO : Ignition 2.19.0
Jul 7 06:05:00.847944 ignition[944]: INFO : Stage: files
Jul 7 06:05:00.849577 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:00.849577 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:05:00.849577 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 06:05:00.852968 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 06:05:00.852968 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 06:05:00.852968 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 06:05:00.852968 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 06:05:00.852968 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 06:05:00.852243 unknown[944]: wrote ssh authorized keys file for user: core
Jul 7 06:05:00.860164 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 06:05:00.860164 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 06:05:00.920130 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 06:05:01.110341 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 06:05:01.110341 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:05:01.114976 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 06:05:01.217615 systemd-networkd[766]: eth0: Gained IPv6LL
Jul 7 06:05:01.616991 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 7 06:05:01.861150 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 06:05:01.861150 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:05:01.864913 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:05:01.887651 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:05:01.891128 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 06:05:01.893446 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 06:05:01.893446 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 06:05:01.893446 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 06:05:01.893446 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:05:01.893446 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 06:05:01.893446 ignition[944]: INFO : files: files passed
Jul 7 06:05:01.893446 ignition[944]: INFO : Ignition finished successfully
Jul 7 06:05:01.894062 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 06:05:01.906479 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 06:05:01.908869 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 06:05:01.910156 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 06:05:01.910234 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 06:05:01.917299 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 06:05:01.919560 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:05:01.919560 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:05:01.922623 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 06:05:01.922060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:05:01.923942 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 06:05:01.934460 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 06:05:01.954469 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 06:05:01.954581 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 06:05:01.956732 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 06:05:01.957757 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 06:05:01.959761 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 06:05:01.960509 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 06:05:01.976209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:05:01.978625 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 06:05:01.989412 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:05:01.990610 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:05:01.992569 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 06:05:01.994235 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 06:05:01.994379 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 06:05:01.997034 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 06:05:01.999014 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 06:05:02.000731 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 06:05:02.002431 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 06:05:02.004392 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 06:05:02.006351 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 06:05:02.008165 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:05:02.010069 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 06:05:02.011984 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 06:05:02.013659 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 06:05:02.015181 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 06:05:02.015296 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:05:02.017588 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:05:02.019422 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:05:02.021323 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 06:05:02.021428 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:05:02.023389 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 06:05:02.023497 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:05:02.026249 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 06:05:02.026387 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:05:02.028280 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 06:05:02.029831 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 06:05:02.033398 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:05:02.034644 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 06:05:02.036644 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 06:05:02.038180 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 06:05:02.038264 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:05:02.039789 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 06:05:02.039867 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:05:02.041333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 06:05:02.041444 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 06:05:02.043157 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 06:05:02.043253 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 06:05:02.055508 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 06:05:02.057113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 06:05:02.058055 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 06:05:02.058184 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:05:02.060147 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 06:05:02.060250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:05:02.066415 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 06:05:02.066508 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 06:05:02.071260 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 06:05:02.073395 ignition[1001]: INFO : Ignition 2.19.0
Jul 7 06:05:02.073395 ignition[1001]: INFO : Stage: umount
Jul 7 06:05:02.073395 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 06:05:02.073395 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:05:02.073395 ignition[1001]: INFO : umount: umount passed
Jul 7 06:05:02.073395 ignition[1001]: INFO : Ignition finished successfully
Jul 7 06:05:02.073921 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 06:05:02.074034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 06:05:02.075485 systemd[1]: Stopped target network.target - Network.
Jul 7 06:05:02.076739 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 06:05:02.076796 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 06:05:02.079532 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 06:05:02.079592 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 06:05:02.081507 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 06:05:02.081563 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 06:05:02.083057 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 06:05:02.083101 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 06:05:02.085671 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 06:05:02.087294 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 06:05:02.094371 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jul 7 06:05:02.095920 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 06:05:02.096033 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 06:05:02.097665 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 06:05:02.097696 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:05:02.106458 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 06:05:02.107297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 06:05:02.107384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:05:02.109489 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:05:02.112787 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 06:05:02.112882 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 06:05:02.116662 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 06:05:02.116749 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:05:02.118238 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 7 06:05:02.118297 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:05:02.120492 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 7 06:05:02.120552 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:05:02.124002 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 7 06:05:02.124114 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 7 06:05:02.128538 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 7 06:05:02.128690 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:05:02.131118 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 06:05:02.131210 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 06:05:02.133037 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 7 06:05:02.133095 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:05:02.134388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 7 06:05:02.134430 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:05:02.136109 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 7 06:05:02.136161 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:05:02.139058 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 7 06:05:02.139107 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:05:02.141621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 06:05:02.141669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:05:02.143845 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 06:05:02.143896 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 06:05:02.151455 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 7 06:05:02.153500 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 7 06:05:02.153571 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:05:02.155554 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 7 06:05:02.155603 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:05:02.157500 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 06:05:02.157555 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:05:02.159612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:05:02.159659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:02.161848 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 7 06:05:02.163638 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 7 06:05:02.165195 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 7 06:05:02.179493 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 7 06:05:02.185480 systemd[1]: Switching root.
Jul 7 06:05:02.215544 systemd-journald[237]: Journal stopped
Jul 7 06:05:02.931450 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jul 7 06:05:02.931497 kernel: SELinux: policy capability network_peer_controls=1
Jul 7 06:05:02.931513 kernel: SELinux: policy capability open_perms=1
Jul 7 06:05:02.931523 kernel: SELinux: policy capability extended_socket_class=1
Jul 7 06:05:02.931532 kernel: SELinux: policy capability always_check_network=0
Jul 7 06:05:02.931552 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 7 06:05:02.931564 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 7 06:05:02.931573 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 7 06:05:02.931586 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 7 06:05:02.931597 kernel: audit: type=1403 audit(1751868302.359:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 7 06:05:02.931610 systemd[1]: Successfully loaded SELinux policy in 32.058ms.
Jul 7 06:05:02.931629 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.261ms.
Jul 7 06:05:02.931641 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 06:05:02.931652 systemd[1]: Detected virtualization kvm.
Jul 7 06:05:02.931662 systemd[1]: Detected architecture arm64.
Jul 7 06:05:02.931673 systemd[1]: Detected first boot.
Jul 7 06:05:02.931683 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:05:02.931696 zram_generator::config[1045]: No configuration found.
Jul 7 06:05:02.931708 systemd[1]: Populated /etc with preset unit settings.
Jul 7 06:05:02.931720 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 7 06:05:02.931736 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 7 06:05:02.931747 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 7 06:05:02.931759 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 7 06:05:02.931769 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 7 06:05:02.931780 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 7 06:05:02.931791 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 7 06:05:02.931802 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 7 06:05:02.931815 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 7 06:05:02.931826 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 7 06:05:02.931837 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 7 06:05:02.931850 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:05:02.931862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:05:02.931873 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 7 06:05:02.931883 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 7 06:05:02.931894 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 7 06:05:02.931910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:05:02.931922 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 7 06:05:02.931933 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:05:02.931945 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 7 06:05:02.931955 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 7 06:05:02.931966 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 7 06:05:02.931976 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 7 06:05:02.931986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:05:02.931997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:05:02.932009 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:05:02.932020 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:05:02.932030 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 7 06:05:02.932040 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 7 06:05:02.932051 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:05:02.932061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:05:02.932072 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:05:02.932082 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 7 06:05:02.932093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 7 06:05:02.932105 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 7 06:05:02.932115 systemd[1]: Mounting media.mount - External Media Directory...
Jul 7 06:05:02.932126 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 7 06:05:02.932136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 7 06:05:02.932147 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 7 06:05:02.932158 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 7 06:05:02.932169 systemd[1]: Reached target machines.target - Containers.
Jul 7 06:05:02.932180 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 7 06:05:02.932193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:02.932203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:05:02.932214 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 7 06:05:02.932224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:05:02.932235 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:05:02.932246 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:05:02.932256 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 7 06:05:02.932267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:05:02.932278 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 7 06:05:02.932290 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 7 06:05:02.932301 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 7 06:05:02.932311 kernel: fuse: init (API version 7.39)
Jul 7 06:05:02.932438 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 7 06:05:02.932452 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 7 06:05:02.932463 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:05:02.932473 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:05:02.932484 kernel: loop: module loaded
Jul 7 06:05:02.932494 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:05:02.932507 kernel: ACPI: bus type drm_connector registered
Jul 7 06:05:02.932518 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 7 06:05:02.932529 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:05:02.932546 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 7 06:05:02.932577 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 7 06:05:02.932599 systemd[1]: Stopped verity-setup.service.
Jul 7 06:05:02.932610 systemd-journald[1116]: Journal started
Jul 7 06:05:02.932632 systemd-journald[1116]: Runtime Journal (/run/log/journal/1b81241a373c422d9d4c1be8983763f8) is 5.9M, max 47.3M, 41.4M free.
Jul 7 06:05:02.722943 systemd[1]: Queued start job for default target multi-user.target.
Jul 7 06:05:02.740314 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 7 06:05:02.740697 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 7 06:05:02.935221 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:05:02.935886 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 7 06:05:02.937033 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 7 06:05:02.938253 systemd[1]: Mounted media.mount - External Media Directory.
Jul 7 06:05:02.939361 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 7 06:05:02.940524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 7 06:05:02.941829 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 7 06:05:02.944370 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 7 06:05:02.945831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:05:02.947440 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 7 06:05:02.947595 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 7 06:05:02.948997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:05:02.949132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:05:02.951834 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:05:02.952002 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:05:02.953311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:05:02.953487 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:05:02.954956 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 7 06:05:02.955118 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 7 06:05:02.956603 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:05:02.956751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:05:02.958151 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:05:02.959685 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:05:02.961185 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 7 06:05:02.975059 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:05:02.983425 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 7 06:05:02.985432 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 7 06:05:02.986507 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 7 06:05:02.986554 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 06:05:02.988470 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 7 06:05:02.990643 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 7 06:05:02.992834 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 7 06:05:02.993911 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:02.995094 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 7 06:05:02.996989 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 7 06:05:02.998221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:05:02.999287 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 7 06:05:03.000417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:05:03.004473 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:05:03.005565 systemd-journald[1116]: Time spent on flushing to /var/log/journal/1b81241a373c422d9d4c1be8983763f8 is 31.355ms for 854 entries.
Jul 7 06:05:03.005565 systemd-journald[1116]: System Journal (/var/log/journal/1b81241a373c422d9d4c1be8983763f8) is 8.0M, max 195.6M, 187.6M free.
Jul 7 06:05:03.043040 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 7 06:05:03.044708 kernel: loop0: detected capacity change from 0 to 114328
Jul 7 06:05:03.044732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 7 06:05:03.009222 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 7 06:05:03.015463 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:05:03.017967 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:05:03.020981 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 7 06:05:03.022584 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 7 06:05:03.025358 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 7 06:05:03.026753 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 7 06:05:03.031113 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 7 06:05:03.040533 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 7 06:05:03.042213 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jul 7 06:05:03.042224 systemd-tmpfiles[1157]: ACLs are not supported, ignoring.
Jul 7 06:05:03.045460 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 7 06:05:03.047941 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 7 06:05:03.052056 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:05:03.059216 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:05:03.065514 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 7 06:05:03.067335 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 7 06:05:03.067956 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 7 06:05:03.068586 kernel: loop1: detected capacity change from 0 to 114432
Jul 7 06:05:03.073185 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 7 06:05:03.095334 kernel: loop2: detected capacity change from 0 to 203944
Jul 7 06:05:03.101756 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 7 06:05:03.115481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:05:03.123380 kernel: loop3: detected capacity change from 0 to 114328
Jul 7 06:05:03.128335 kernel: loop4: detected capacity change from 0 to 114432
Jul 7 06:05:03.130121 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 7 06:05:03.130141 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 7 06:05:03.133333 kernel: loop5: detected capacity change from 0 to 203944
Jul 7 06:05:03.134703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:05:03.137795 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 7 06:05:03.138157 (sd-merge)[1182]: Merged extensions into '/usr'.
Jul 7 06:05:03.144492 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 7 06:05:03.144508 systemd[1]: Reloading...
Jul 7 06:05:03.215556 zram_generator::config[1218]: No configuration found.
Jul 7 06:05:03.290020 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 7 06:05:03.304819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:05:03.344968 systemd[1]: Reloading finished in 200 ms.
Jul 7 06:05:03.375936 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 7 06:05:03.379457 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 7 06:05:03.393494 systemd[1]: Starting ensure-sysext.service...
Jul 7 06:05:03.395439 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
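
For context on the (sd-merge) entries above: systemd-sysext scans /etc/extensions, /run/extensions, and /var/lib/extensions for raw system-extension images and overlays their /usr (and /opt) trees onto the running system with overlayfs, leaving the immutable base image untouched. The kubernetes extension became eligible through the symlink Ignition wrote earlier in this boot:

    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw

The containerd-flatcar and docker-flatcar images ship with the OS image itself; once all three are merged, their unit files appear under /usr, which is why systemd-sysext immediately requests the daemon reload seen above.
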
Jul 7 06:05:03.404014 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Jul 7 06:05:03.404030 systemd[1]: Reloading...
Jul 7 06:05:03.412818 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 7 06:05:03.413076 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 7 06:05:03.413755 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 7 06:05:03.413968 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jul 7 06:05:03.414023 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jul 7 06:05:03.416202 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:05:03.416218 systemd-tmpfiles[1244]: Skipping /boot
Jul 7 06:05:03.423511 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Jul 7 06:05:03.423527 systemd-tmpfiles[1244]: Skipping /boot
Jul 7 06:05:03.451478 zram_generator::config[1271]: No configuration found.
Jul 7 06:05:03.545424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 06:05:03.586285 systemd[1]: Reloading finished in 181 ms.
Jul 7 06:05:03.599585 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 7 06:05:03.613754 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:05:03.622296 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 06:05:03.624978 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 7 06:05:03.627590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 7 06:05:03.631636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:05:03.640306 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:05:03.644742 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 7 06:05:03.648615 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:03.654687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:05:03.658369 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:05:03.662641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:05:03.664615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:03.665435 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 7 06:05:03.667061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:05:03.667176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:05:03.668883 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:05:03.669014 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:05:03.670533 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:05:03.670656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:05:03.677396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:05:03.677630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:05:03.686663 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 7 06:05:03.688135 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Jul 7 06:05:03.693626 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 7 06:05:03.697257 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 7 06:05:03.699102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 7 06:05:03.701129 augenrules[1339]: No rules
Jul 7 06:05:03.701262 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 7 06:05:03.702982 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 06:05:03.705025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:05:03.722347 systemd[1]: Finished ensure-sysext.service.
Jul 7 06:05:03.726476 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 7 06:05:03.742659 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 7 06:05:03.746447 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 7 06:05:03.750490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 7 06:05:03.753828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 7 06:05:03.754932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 7 06:05:03.756402 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:05:03.760455 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 7 06:05:03.761589 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 7 06:05:03.761850 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 7 06:05:03.765933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 7 06:05:03.766081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 7 06:05:03.767480 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 7 06:05:03.767615 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 7 06:05:03.768889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 7 06:05:03.769019 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 7 06:05:03.772988 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 7 06:05:03.779466 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 7 06:05:03.792142 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 7 06:05:03.792342 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 7 06:05:03.794410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1347)
Jul 7 06:05:03.794750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 7 06:05:03.830681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:05:03.843468 systemd-networkd[1379]: lo: Link UP
Jul 7 06:05:03.843477 systemd-networkd[1379]: lo: Gained carrier
Jul 7 06:05:03.843671 systemd-resolved[1314]: Positive Trust Anchors:
Jul 7 06:05:03.843680 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:05:03.843712 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:05:03.844133 systemd-networkd[1379]: Enumeration completed
Jul 7 06:05:03.844585 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 7 06:05:03.846449 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:05:03.847735 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 7 06:05:03.848990 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:05:03.848999 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:05:03.849575 systemd[1]: Reached target time-set.target - System Time Set.
Jul 7 06:05:03.851680 systemd-networkd[1379]: eth0: Link UP
Jul 7 06:05:03.851685 systemd-networkd[1379]: eth0: Gained carrier
Jul 7 06:05:03.851698 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:05:03.853172 systemd-resolved[1314]: Defaulting to hostname 'linux'.
Jul 7 06:05:03.853828 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 7 06:05:03.856993 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:05:03.858102 systemd[1]: Reached target network.target - Network.
Jul 7 06:05:03.858953 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:05:03.866637 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 7 06:05:03.868088 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:05:03.868845 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection.
Jul 7 06:05:03.871634 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 7 06:05:03.871689 systemd-timesyncd[1380]: Initial clock synchronization to Mon 2025-07-07 06:05:04.037490 UTC.
Jul 7 06:05:03.883580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:05:03.890364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 7 06:05:03.893085 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 7 06:05:03.928993 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 06:05:03.933613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:05:03.969870 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 7 06:05:03.971375 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:05:03.972420 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 06:05:03.973521 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 7 06:05:03.974717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 7 06:05:03.976172 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 7 06:05:03.977335 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 7 06:05:03.978517 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 7 06:05:03.979743 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 7 06:05:03.979779 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:05:03.980685 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:05:03.982079 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 7 06:05:03.984423 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 7 06:05:03.998351 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 7 06:05:04.000503 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 7 06:05:04.002068 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 7 06:05:04.003258 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:05:04.004206 systemd[1]: Reached target basic.target - Basic System.
Jul 7 06:05:04.005190 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:05:04.005224 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 7 06:05:04.006151 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 7 06:05:04.008165 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 7 06:05:04.009100 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 7 06:05:04.011158 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 7 06:05:04.016128 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 7 06:05:04.017218 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 7 06:05:04.019510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 7 06:05:04.024034 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 7 06:05:04.027591 jq[1413]: false
Jul 7 06:05:04.028062 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 7 06:05:04.030428 extend-filesystems[1414]: Found loop3
Jul 7 06:05:04.030428 extend-filesystems[1414]: Found loop4
Jul 7 06:05:04.030428 extend-filesystems[1414]: Found loop5
Jul 7 06:05:04.030428 extend-filesystems[1414]: Found vda
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda1
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda2
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda3
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found usr
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda4
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda6
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda7
Jul 7 06:05:04.037651 extend-filesystems[1414]: Found vda9
Jul 7 06:05:04.037651 extend-filesystems[1414]: Checking size of /dev/vda9
Jul 7 06:05:04.031041 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 7 06:05:04.051263 dbus-daemon[1412]: [system] SELinux support is enabled
Jul 7 06:05:04.034525 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 7 06:05:04.038833 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 7 06:05:04.039228 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 7 06:05:04.040758 systemd[1]: Starting update-engine.service - Update Engine...
Jul 7 06:05:04.043855 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 7 06:05:04.046017 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 7 06:05:04.054759 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 7 06:05:04.056385 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 7 06:05:04.056703 jq[1427]: true
Jul 7 06:05:04.057471 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 7 06:05:04.061169 extend-filesystems[1414]: Resized partition /dev/vda9
Jul 7 06:05:04.062964 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 7 06:05:04.063160 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 7 06:05:04.070960 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 7 06:05:04.071017 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 7 06:05:04.076942 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 7 06:05:04.076984 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 7 06:05:04.081374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1361)
Jul 7 06:05:04.085404 extend-filesystems[1437]: resize2fs 1.47.1 (20-May-2024)
Jul 7 06:05:04.090530 jq[1438]: true
Jul 7 06:05:04.089824 systemd[1]: motdgen.service: Deactivated successfully.
Jul 7 06:05:04.090008 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
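
The extend-filesystems entries just below record an online grow of the root ext4 filesystem on /dev/vda9 while it is mounted at /. In round numbers, with 4 KiB blocks:

    553472 blocks  x 4096 B = ~2.1 GiB  (before)
    1864699 blocks x 4096 B = ~7.1 GiB  (after)

i.e. the filesystem is enlarged roughly 3.4x in place to fill its partition, with no unmount or reboot.
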
Jul 7 06:05:04.093096 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:05:04.094641 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:05:04.099807 update_engine[1423]: I20250707 06:05:04.099582 1423 main.cc:92] Flatcar Update Engine starting Jul 7 06:05:04.103387 tar[1433]: linux-arm64/helm Jul 7 06:05:04.111718 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:05:04.112726 update_engine[1423]: I20250707 06:05:04.112143 1423 update_check_scheduler.cc:74] Next update check in 10m49s Jul 7 06:05:04.120369 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:05:04.122535 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:05:04.133268 extend-filesystems[1437]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:05:04.133268 extend-filesystems[1437]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:05:04.133268 extend-filesystems[1437]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 06:05:04.137751 extend-filesystems[1414]: Resized filesystem in /dev/vda9 Jul 7 06:05:04.136999 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:05:04.137602 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 06:05:04.142194 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 06:05:04.144524 systemd-logind[1421]: New seat seat0. Jul 7 06:05:04.151596 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 06:05:04.171379 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:05:04.174399 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:05:04.176665 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:05:04.208263 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:05:04.317869 containerd[1447]: time="2025-07-07T06:05:04.317713504Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 06:05:04.347276 containerd[1447]: time="2025-07-07T06:05:04.347117376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:05:04.348750 containerd[1447]: time="2025-07-07T06:05:04.348713373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.348879550Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.348908049Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349082964Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349101542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349154906Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349168094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349350644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349366854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 06:05:04.349437 containerd[1447]: time="2025-07-07T06:05:04.349380164Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:05:04.351723 containerd[1447]: time="2025-07-07T06:05:04.351637889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 06:05:04.352215 containerd[1447]: time="2025-07-07T06:05:04.352190276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:05:04.352661 containerd[1447]: time="2025-07-07T06:05:04.352636791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 06:05:04.352971 containerd[1447]: time="2025-07-07T06:05:04.352945954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 06:05:04.353386 containerd[1447]: time="2025-07-07T06:05:04.353109110Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 06:05:04.353386 containerd[1447]: time="2025-07-07T06:05:04.353263079Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 06:05:04.353386 containerd[1447]: time="2025-07-07T06:05:04.353310319Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:05:04.357214 containerd[1447]: time="2025-07-07T06:05:04.357183435Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 06:05:04.357392 containerd[1447]: time="2025-07-07T06:05:04.357374273Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 06:05:04.357563 containerd[1447]: time="2025-07-07T06:05:04.357544248Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.357651794Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.357685437Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.357946585Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358183724Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358284451Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358301641Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358314870Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358328548Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358365907Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358380973Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358395141Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358410085Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358424212Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360067 containerd[1447]: time="2025-07-07T06:05:04.358436420Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358448955Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358469492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358483415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358497297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358509465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358522857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358536943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358555806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358569362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358584959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358599943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358612519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358626401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358638283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360365 containerd[1447]: time="2025-07-07T06:05:04.358654370Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358674621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358687033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358698997Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358800499Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358828141Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358841166Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358853374Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358863377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358881996Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358891591Z" level=info msg="NRI interface is disabled by configuration." Jul 7 06:05:04.360638 containerd[1447]: time="2025-07-07T06:05:04.358903227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.359247872Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.359309525Z" level=info msg="Connect containerd service" Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.359422787Z" level=info msg="using legacy CRI server" Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.359431728Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.359518451Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.360440347Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:05:04.360828 
containerd[1447]: time="2025-07-07T06:05:04.360682304Z" level=info msg="Start subscribing containerd event" Jul 7 06:05:04.360828 containerd[1447]: time="2025-07-07T06:05:04.360731423Z" level=info msg="Start recovering state" Jul 7 06:05:04.361275 containerd[1447]: time="2025-07-07T06:05:04.361072311Z" level=info msg="Start event monitor" Jul 7 06:05:04.361275 containerd[1447]: time="2025-07-07T06:05:04.361107914Z" level=info msg="Start snapshots syncer" Jul 7 06:05:04.361275 containerd[1447]: time="2025-07-07T06:05:04.361117550Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:05:04.361275 containerd[1447]: time="2025-07-07T06:05:04.361124859Z" level=info msg="Start streaming server" Jul 7 06:05:04.361904 containerd[1447]: time="2025-07-07T06:05:04.361865838Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:05:04.362024 containerd[1447]: time="2025-07-07T06:05:04.361995554Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:05:04.364529 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:05:04.365757 containerd[1447]: time="2025-07-07T06:05:04.364424601Z" level=info msg="containerd successfully booted in 0.048979s" Jul 7 06:05:04.468651 tar[1433]: linux-arm64/LICENSE Jul 7 06:05:04.468752 tar[1433]: linux-arm64/README.md Jul 7 06:05:04.482457 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:05:04.875978 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:05:04.894961 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:05:04.907885 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:05:04.913184 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:05:04.914391 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:05:04.917146 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:05:04.928722 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:05:04.931635 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:05:04.933744 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 06:05:04.935181 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:05:05.186643 systemd-networkd[1379]: eth0: Gained IPv6LL Jul 7 06:05:05.189092 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:05:05.190892 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:05:05.199689 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:05:05.202022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:05.204113 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:05:05.220214 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 06:05:05.220417 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:05:05.222356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:05:05.225663 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:05:05.767504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:05.769097 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 7 06:05:05.772998 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:05:05.775176 systemd[1]: Startup finished in 585ms (kernel) + 4.651s (initrd) + 3.448s (userspace) = 8.684s. Jul 7 06:05:06.205016 kubelet[1525]: E0707 06:05:06.204854 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:05:06.207146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:05:06.207304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:05:10.938144 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:05:10.939278 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:33172.service - OpenSSH per-connection server daemon (10.0.0.1:33172). Jul 7 06:05:11.015160 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 33172 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:11.015999 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:11.027213 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:05:11.039615 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:05:11.041408 systemd-logind[1421]: New session 1 of user core. Jul 7 06:05:11.051994 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:05:11.067668 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:05:11.069993 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:05:11.144132 systemd[1543]: Queued start job for default target default.target. Jul 7 06:05:11.153240 systemd[1543]: Created slice app.slice - User Application Slice. Jul 7 06:05:11.153269 systemd[1543]: Reached target paths.target - Paths. Jul 7 06:05:11.153281 systemd[1543]: Reached target timers.target - Timers. Jul 7 06:05:11.154509 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:05:11.164380 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:05:11.164440 systemd[1543]: Reached target sockets.target - Sockets. Jul 7 06:05:11.164452 systemd[1543]: Reached target basic.target - Basic System. Jul 7 06:05:11.164485 systemd[1543]: Reached target default.target - Main User Target. Jul 7 06:05:11.164513 systemd[1543]: Startup finished in 89ms. Jul 7 06:05:11.164795 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:05:11.166006 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:05:11.252241 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:33178.service - OpenSSH per-connection server daemon (10.0.0.1:33178). Jul 7 06:05:11.290457 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 33178 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:11.291854 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:11.296555 systemd-logind[1421]: New session 2 of user core. Jul 7 06:05:11.302510 systemd[1]: Started session-2.scope - Session 2 of User core. 
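[annotation] The kubelet crash loop here is the normal pre-init state of a kubeadm-style node: /var/lib/kubelet/config.yaml is generated by kubeadm init (or kubeadm join), so every systemd restart of kubelet.service fails the same way until that runs. A skeleton of the file kubeadm later writes, for orientation only (the real file carries many more fields, so hand-writing it is rarely appropriate):

    # illustrative sketch of /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches SystemdCgroup:true in the CRI dump above
    staticPodPath: /etc/kubernetes/manifests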
Jul 7 06:05:11.355499 sshd[1554]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:11.369071 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:33178.service: Deactivated successfully. Jul 7 06:05:11.370466 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:05:11.372106 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:05:11.374090 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:33194.service - OpenSSH per-connection server daemon (10.0.0.1:33194). Jul 7 06:05:11.375696 systemd-logind[1421]: Removed session 2. Jul 7 06:05:11.413395 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 33194 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:11.414723 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:11.419297 systemd-logind[1421]: New session 3 of user core. Jul 7 06:05:11.430552 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:05:11.478958 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:11.487823 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:33194.service: Deactivated successfully. Jul 7 06:05:11.490592 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:05:11.492485 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:05:11.493044 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:33210.service - OpenSSH per-connection server daemon (10.0.0.1:33210). Jul 7 06:05:11.493738 systemd-logind[1421]: Removed session 3. Jul 7 06:05:11.533333 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 33210 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:11.534817 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:11.538308 systemd-logind[1421]: New session 4 of user core. Jul 7 06:05:11.550497 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:05:11.602859 sshd[1568]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:11.616836 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:33210.service: Deactivated successfully. Jul 7 06:05:11.618247 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:05:11.621395 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:05:11.634605 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:33214.service - OpenSSH per-connection server daemon (10.0.0.1:33214). Jul 7 06:05:11.635751 systemd-logind[1421]: Removed session 4. Jul 7 06:05:11.668949 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 33214 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:11.670122 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:11.674394 systemd-logind[1421]: New session 5 of user core. Jul 7 06:05:11.683469 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:05:11.746967 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:05:11.747250 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:05:11.762175 sudo[1578]: pam_unix(sudo:session): session closed for user root Jul 7 06:05:11.763862 sshd[1575]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:11.780896 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:33214.service: Deactivated successfully. Jul 7 06:05:11.782322 systemd[1]: session-5.scope: Deactivated successfully. 
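[annotation] The first sudo command of these sessions switches SELinux to enforcing mode. setenforce only changes the mode until reboot; the persistence mechanism below is the conventional-distro one and is an assumption for Flatcar, which manages SELinux policy itself:

    getenforce          # print the current mode (Enforcing/Permissive)
    sudo setenforce 1   # enforce until next boot, as done above
    # persistent setting on conventional distros (assumption for Flatcar):
    #   SELINUX=enforcing in /etc/selinux/config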
Jul 7 06:05:11.785182 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:05:11.792624 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:33216.service - OpenSSH per-connection server daemon (10.0.0.1:33216). Jul 7 06:05:11.793418 systemd-logind[1421]: Removed session 5. Jul 7 06:05:11.827172 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 33216 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:11.828622 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:11.832743 systemd-logind[1421]: New session 6 of user core. Jul 7 06:05:11.840478 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:05:11.892160 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:05:11.892475 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:05:11.895619 sudo[1587]: pam_unix(sudo:session): session closed for user root Jul 7 06:05:11.900185 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 06:05:11.900523 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:05:11.921691 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 06:05:11.922933 auditctl[1590]: No rules Jul 7 06:05:11.923800 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:05:11.924043 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 06:05:11.928075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 06:05:11.952924 augenrules[1608]: No rules Jul 7 06:05:11.953669 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 06:05:11.955556 sudo[1586]: pam_unix(sudo:session): session closed for user root Jul 7 06:05:11.957259 sshd[1583]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:11.969899 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:33216.service: Deactivated successfully. Jul 7 06:05:11.971503 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:05:11.973444 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:05:11.986595 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:33218.service - OpenSSH per-connection server daemon (10.0.0.1:33218). Jul 7 06:05:11.987659 systemd-logind[1421]: Removed session 6. Jul 7 06:05:12.020793 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 33218 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:05:12.022187 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:05:12.026176 systemd-logind[1421]: New session 7 of user core. Jul 7 06:05:12.038559 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:05:12.089360 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:05:12.089984 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:05:12.402610 systemd[1]: Starting docker.service - Docker Application Container Engine... 
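[annotation] The audit sequence above is the stock augenrules workflow: rule fragments under /etc/audit/rules.d/ are removed, audit-rules.service is restarted, and both auditctl and augenrules then report "No rules" because nothing is left to compile. The equivalent manual steps, as a sketch:

    sudo auditctl -l        # list the kernel audit rules currently loaded
    sudo augenrules --load  # recompile /etc/audit/rules.d/*.rules and load them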
Jul 7 06:05:12.402735 (dockerd)[1638]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:05:12.672349 dockerd[1638]: time="2025-07-07T06:05:12.672197669Z" level=info msg="Starting up" Jul 7 06:05:12.820421 dockerd[1638]: time="2025-07-07T06:05:12.820373426Z" level=info msg="Loading containers: start." Jul 7 06:05:12.920950 kernel: Initializing XFRM netlink socket Jul 7 06:05:12.982203 systemd-networkd[1379]: docker0: Link UP Jul 7 06:05:13.000471 dockerd[1638]: time="2025-07-07T06:05:13.000415244Z" level=info msg="Loading containers: done." Jul 7 06:05:13.015318 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck52546712-merged.mount: Deactivated successfully. Jul 7 06:05:13.017128 dockerd[1638]: time="2025-07-07T06:05:13.016968781Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:05:13.017128 dockerd[1638]: time="2025-07-07T06:05:13.017107521Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 06:05:13.017278 dockerd[1638]: time="2025-07-07T06:05:13.017209392Z" level=info msg="Daemon has completed initialization" Jul 7 06:05:13.042968 dockerd[1638]: time="2025-07-07T06:05:13.042844815Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:05:13.043047 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:05:13.689814 containerd[1447]: time="2025-07-07T06:05:13.689770375Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 7 06:05:14.309085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372109700.mount: Deactivated successfully. 
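[annotation] dockerd comes up on overlay2 and warns that CONFIG_OVERLAY_FS_REDIRECT_DIR may degrade image-build performance; that is informational, not a failure. The effective storage and cgroup setup can be confirmed from the CLI (the format strings use documented docker info fields):

    docker info --format '{{.Driver}}'                        # expect: overlay2
    docker info --format '{{.CgroupDriver}}/{{.CgroupVersion}}'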
Jul 7 06:05:15.280352 containerd[1447]: time="2025-07-07T06:05:15.280241872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:15.281140 containerd[1447]: time="2025-07-07T06:05:15.280894413Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 7 06:05:15.281892 containerd[1447]: time="2025-07-07T06:05:15.281855621Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:15.284789 containerd[1447]: time="2025-07-07T06:05:15.284748570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:15.286910 containerd[1447]: time="2025-07-07T06:05:15.286878976Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.597067318s" Jul 7 06:05:15.287204 containerd[1447]: time="2025-07-07T06:05:15.286992957Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 7 06:05:15.290041 containerd[1447]: time="2025-07-07T06:05:15.290017411Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 7 06:05:16.337735 containerd[1447]: time="2025-07-07T06:05:16.337678221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:16.338104 containerd[1447]: time="2025-07-07T06:05:16.338064146Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 7 06:05:16.338988 containerd[1447]: time="2025-07-07T06:05:16.338964292Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:16.341772 containerd[1447]: time="2025-07-07T06:05:16.341744099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:16.343008 containerd[1447]: time="2025-07-07T06:05:16.342973776Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.052889532s" Jul 7 06:05:16.343053 containerd[1447]: time="2025-07-07T06:05:16.343009806Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 7 06:05:16.343559 
containerd[1447]: time="2025-07-07T06:05:16.343531577Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 7 06:05:16.457726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:05:16.467549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:16.564162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:16.568254 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:05:16.608456 kubelet[1853]: E0707 06:05:16.608315 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:05:16.611469 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:05:16.611618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:05:17.566252 containerd[1447]: time="2025-07-07T06:05:17.566203032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:17.567239 containerd[1447]: time="2025-07-07T06:05:17.567167384Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 7 06:05:17.568160 containerd[1447]: time="2025-07-07T06:05:17.568091551Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:17.571004 containerd[1447]: time="2025-07-07T06:05:17.570946913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:17.572283 containerd[1447]: time="2025-07-07T06:05:17.572151701Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.228583779s" Jul 7 06:05:17.572283 containerd[1447]: time="2025-07-07T06:05:17.572189158Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 7 06:05:17.573005 containerd[1447]: time="2025-07-07T06:05:17.572789987Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 7 06:05:18.460034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271738370.mount: Deactivated successfully. 
Jul 7 06:05:18.814682 containerd[1447]: time="2025-07-07T06:05:18.814525751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:18.815271 containerd[1447]: time="2025-07-07T06:05:18.815248054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 7 06:05:18.815850 containerd[1447]: time="2025-07-07T06:05:18.815794477Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:18.817886 containerd[1447]: time="2025-07-07T06:05:18.817839759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:18.818818 containerd[1447]: time="2025-07-07T06:05:18.818667920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.245846503s" Jul 7 06:05:18.818818 containerd[1447]: time="2025-07-07T06:05:18.818705600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 7 06:05:18.819379 containerd[1447]: time="2025-07-07T06:05:18.819355071Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:05:19.440632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897411677.mount: Deactivated successfully. 
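[annotation] The pulls in this stretch go through containerd's CRI images service, so they can be reproduced or inspected with crictl pointed at the socket logged at 06:05:04 (crictl itself is assumed installed; it does not appear in this log):

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-proxy:v1.31.10
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images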
Jul 7 06:05:20.113101 containerd[1447]: time="2025-07-07T06:05:20.113044290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:20.113712 containerd[1447]: time="2025-07-07T06:05:20.113668295Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 7 06:05:20.114443 containerd[1447]: time="2025-07-07T06:05:20.114412192Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:20.118010 containerd[1447]: time="2025-07-07T06:05:20.117976902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:20.120288 containerd[1447]: time="2025-07-07T06:05:20.120252943Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.300865576s" Jul 7 06:05:20.120333 containerd[1447]: time="2025-07-07T06:05:20.120288269Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 7 06:05:20.120834 containerd[1447]: time="2025-07-07T06:05:20.120807137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:05:20.662162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1457377229.mount: Deactivated successfully. 
Jul 7 06:05:20.667275 containerd[1447]: time="2025-07-07T06:05:20.667105010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:20.667780 containerd[1447]: time="2025-07-07T06:05:20.667588913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 7 06:05:20.668352 containerd[1447]: time="2025-07-07T06:05:20.668311879Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:20.670611 containerd[1447]: time="2025-07-07T06:05:20.670581063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:20.672086 containerd[1447]: time="2025-07-07T06:05:20.672050935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 551.208833ms" Jul 7 06:05:20.672086 containerd[1447]: time="2025-07-07T06:05:20.672081289Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 7 06:05:20.672664 containerd[1447]: time="2025-07-07T06:05:20.672461338Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 06:05:21.191660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1972742627.mount: Deactivated successfully. Jul 7 06:05:22.691820 containerd[1447]: time="2025-07-07T06:05:22.691757018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:22.692229 containerd[1447]: time="2025-07-07T06:05:22.692189587Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 7 06:05:22.693269 containerd[1447]: time="2025-07-07T06:05:22.693224364Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:22.696578 containerd[1447]: time="2025-07-07T06:05:22.696520013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:22.697964 containerd[1447]: time="2025-07-07T06:05:22.697875470Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.025380054s" Jul 7 06:05:22.697964 containerd[1447]: time="2025-07-07T06:05:22.697915946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 7 06:05:26.862036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
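[annotation] Taken together, the images fetched since 06:05:13 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns 1.11.3, pause 3.10, etcd 3.5.15-0) are exactly the control-plane set kubeadm prefetches, which can also be driven explicitly:

    kubeadm config images list --kubernetes-version v1.31.10
    kubeadm config images pull --kubernetes-version v1.31.10

Note the mild skew with the CRI config dumped at 06:05:04, which still pins SandboxImage registry.k8s.io/pause:3.8 even though pause:3.10 was just pulled; containerd's sandbox_image setting under [plugins."io.containerd.grpc.v1.cri"] can be aligned to avoid keeping both, though nothing in this boot fails because of it.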
Jul 7 06:05:26.872484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:27.015229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:27.018859 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:05:27.054820 kubelet[2016]: E0707 06:05:27.054780 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:05:27.057432 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:05:27.057574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:05:27.553217 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:27.562548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:27.582302 systemd[1]: Reloading requested from client PID 2032 ('systemctl') (unit session-7.scope)... Jul 7 06:05:27.582350 systemd[1]: Reloading... Jul 7 06:05:27.651351 zram_generator::config[2074]: No configuration found. Jul 7 06:05:27.802628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:05:27.860914 systemd[1]: Reloading finished in 278 ms. Jul 7 06:05:27.910160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:27.911448 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:27.913754 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:05:27.913934 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:27.915305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:28.025098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:28.028914 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:05:28.066570 kubelet[2118]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:05:28.066570 kubelet[2118]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:05:28.066570 kubelet[2118]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
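[annotation] The deprecation warnings above point at flags that now belong in the config file. For the two with direct KubeletConfiguration (v1beta1) equivalents, the mapping looks like this, using the containerd socket logged earlier and the flexvolume directory the kubelet probes a moment later (a sketch, not the node's actual file contents):

    # fields as they would appear in /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/

--pod-infra-container-image has no config-file equivalent; as the warning says, the image garbage collector will take the sandbox image from the CRI instead.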
Jul 7 06:05:28.066570 kubelet[2118]: I0707 06:05:28.066540 2118 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:05:29.211822 kubelet[2118]: I0707 06:05:29.211624 2118 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:05:29.211822 kubelet[2118]: I0707 06:05:29.211654 2118 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:05:29.212217 kubelet[2118]: I0707 06:05:29.211890 2118 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:05:29.241019 kubelet[2118]: E0707 06:05:29.240976 2118 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:29.241827 kubelet[2118]: I0707 06:05:29.241792 2118 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:05:29.250218 kubelet[2118]: E0707 06:05:29.250020 2118 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:05:29.250218 kubelet[2118]: I0707 06:05:29.250046 2118 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 06:05:29.254066 kubelet[2118]: I0707 06:05:29.254006 2118 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:05:29.255061 kubelet[2118]: I0707 06:05:29.254975 2118 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:05:29.255181 kubelet[2118]: I0707 06:05:29.255146 2118 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:05:29.255333 kubelet[2118]: I0707 06:05:29.255170 2118 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:05:29.255418 kubelet[2118]: I0707 06:05:29.255340 2118 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:05:29.255418 kubelet[2118]: I0707 06:05:29.255350 2118 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:05:29.255599 kubelet[2118]: I0707 06:05:29.255573 2118 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:05:29.257746 kubelet[2118]: I0707 06:05:29.257694 2118 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:05:29.257746 kubelet[2118]: I0707 06:05:29.257729 2118 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:05:29.257746 kubelet[2118]: I0707 06:05:29.257750 2118 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:05:29.257866 kubelet[2118]: I0707 06:05:29.257824 2118 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:05:29.266042 kubelet[2118]: W0707 06:05:29.265991 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:29.266152 kubelet[2118]: E0707 06:05:29.266049 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:29.266329 kubelet[2118]: W0707 06:05:29.265981 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:29.266329 kubelet[2118]: E0707 06:05:29.266292 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:29.266541 kubelet[2118]: I0707 06:05:29.266508 2118 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:05:29.267206 kubelet[2118]: I0707 06:05:29.267184 2118 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:05:29.269278 kubelet[2118]: W0707 06:05:29.269236 2118 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:05:29.270492 kubelet[2118]: I0707 06:05:29.270086 2118 server.go:1274] "Started kubelet" Jul 7 06:05:29.270492 kubelet[2118]: I0707 06:05:29.270357 2118 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:05:29.270932 kubelet[2118]: I0707 06:05:29.270878 2118 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:05:29.271214 kubelet[2118]: I0707 06:05:29.271184 2118 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:05:29.272977 kubelet[2118]: I0707 06:05:29.272688 2118 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:05:29.272977 kubelet[2118]: I0707 06:05:29.272833 2118 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:05:29.273569 kubelet[2118]: I0707 06:05:29.273519 2118 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:05:29.274359 kubelet[2118]: E0707 06:05:29.273310 2118 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe2f6b36621fa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:05:29.270059514 +0000 UTC m=+1.238213791,LastTimestamp:2025-07-07 06:05:29.270059514 +0000 UTC m=+1.238213791,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:05:29.274359 kubelet[2118]: I0707 06:05:29.273771 2118 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:05:29.274475 kubelet[2118]: W0707 06:05:29.274036 2118 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:29.274475 kubelet[2118]: I0707 06:05:29.274391 2118 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:05:29.274475 kubelet[2118]: I0707 06:05:29.273765 2118 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:05:29.274475 kubelet[2118]: I0707 06:05:29.274460 2118 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:05:29.274641 kubelet[2118]: E0707 06:05:29.274392 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:29.275457 kubelet[2118]: E0707 06:05:29.275220 2118 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:05:29.275665 kubelet[2118]: I0707 06:05:29.275651 2118 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:05:29.276015 kubelet[2118]: I0707 06:05:29.275994 2118 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:05:29.276099 kubelet[2118]: E0707 06:05:29.276084 2118 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:05:29.276269 kubelet[2118]: E0707 06:05:29.276233 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Jul 7 06:05:29.285463 kubelet[2118]: I0707 06:05:29.285409 2118 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:05:29.286786 kubelet[2118]: I0707 06:05:29.286493 2118 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:05:29.286786 kubelet[2118]: I0707 06:05:29.286514 2118 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:05:29.286786 kubelet[2118]: I0707 06:05:29.286533 2118 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:05:29.286786 kubelet[2118]: E0707 06:05:29.286568 2118 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:05:29.289160 kubelet[2118]: I0707 06:05:29.289122 2118 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:05:29.289160 kubelet[2118]: I0707 06:05:29.289137 2118 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:05:29.289160 kubelet[2118]: I0707 06:05:29.289162 2118 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:05:29.290169 kubelet[2118]: W0707 06:05:29.290030 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:29.290169 kubelet[2118]: E0707 06:05:29.290082 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:29.376770 kubelet[2118]: E0707 06:05:29.376723 2118 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:05:29.386989 kubelet[2118]: E0707 06:05:29.386927 2118 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:05:29.478003 kubelet[2118]: E0707 06:05:29.477303 2118 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:05:29.478003 kubelet[2118]: E0707 06:05:29.477759 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Jul 7 06:05:29.491960 kubelet[2118]: I0707 06:05:29.491743 2118 policy_none.go:49] "None policy: Start" Jul 7 06:05:29.492746 kubelet[2118]: I0707 06:05:29.492710 2118 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:05:29.493019 kubelet[2118]: I0707 06:05:29.493010 2118 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:05:29.500814 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:05:29.518943 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 06:05:29.522309 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 7 06:05:29.533386 kubelet[2118]: I0707 06:05:29.533346 2118 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:05:29.533781 kubelet[2118]: I0707 06:05:29.533546 2118 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:05:29.533781 kubelet[2118]: I0707 06:05:29.533566 2118 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:05:29.536146 kubelet[2118]: I0707 06:05:29.534564 2118 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:05:29.536828 kubelet[2118]: E0707 06:05:29.536583 2118 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:05:29.595988 systemd[1]: Created slice kubepods-burstable-podcd8bbc50091b2a906fc60226b64748e0.slice - libcontainer container kubepods-burstable-podcd8bbc50091b2a906fc60226b64748e0.slice. Jul 7 06:05:29.608660 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 7 06:05:29.624672 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 7 06:05:29.635331 kubelet[2118]: I0707 06:05:29.635291 2118 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:05:29.635805 kubelet[2118]: E0707 06:05:29.635780 2118 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jul 7 06:05:29.678582 kubelet[2118]: I0707 06:05:29.678337 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:29.678582 kubelet[2118]: I0707 06:05:29.678375 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:29.678582 kubelet[2118]: I0707 06:05:29.678401 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:29.678582 kubelet[2118]: I0707 06:05:29.678423 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:29.678582 kubelet[2118]: I0707 06:05:29.678457 2118 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:29.678847 kubelet[2118]: I0707 06:05:29.678479 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:05:29.678847 kubelet[2118]: I0707 06:05:29.678493 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd8bbc50091b2a906fc60226b64748e0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd8bbc50091b2a906fc60226b64748e0\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:29.678847 kubelet[2118]: I0707 06:05:29.678507 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd8bbc50091b2a906fc60226b64748e0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd8bbc50091b2a906fc60226b64748e0\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:29.678847 kubelet[2118]: I0707 06:05:29.678522 2118 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd8bbc50091b2a906fc60226b64748e0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd8bbc50091b2a906fc60226b64748e0\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:29.838068 kubelet[2118]: I0707 06:05:29.837960 2118 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:05:29.838734 kubelet[2118]: E0707 06:05:29.838337 2118 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jul 7 06:05:29.879018 kubelet[2118]: E0707 06:05:29.878956 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Jul 7 06:05:29.908459 kubelet[2118]: E0707 06:05:29.908433 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:29.909105 containerd[1447]: time="2025-07-07T06:05:29.909053369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd8bbc50091b2a906fc60226b64748e0,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:29.923950 kubelet[2118]: E0707 06:05:29.923856 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:29.924417 containerd[1447]: time="2025-07-07T06:05:29.924374683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:29.927354 
kubelet[2118]: E0707 06:05:29.927280 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:29.927944 containerd[1447]: time="2025-07-07T06:05:29.927699970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:30.240083 kubelet[2118]: I0707 06:05:30.239984 2118 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:05:30.240393 kubelet[2118]: E0707 06:05:30.240314 2118 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jul 7 06:05:30.374641 kubelet[2118]: W0707 06:05:30.374561 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:30.374641 kubelet[2118]: E0707 06:05:30.374634 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.91:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:30.406357 kubelet[2118]: W0707 06:05:30.406269 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:30.406469 kubelet[2118]: E0707 06:05:30.406369 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:30.435091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount600255898.mount: Deactivated successfully. 
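The dns.go warning repeated throughout this boot reflects the glibc resolver's three-nameserver limit: the kubelet truncates longer nameserver lists and logs the line it actually applied (here "1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that check, assuming a standard /etc/resolv.conf layout; this is an illustration, not the kubelet's actual implementation:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc resolver limit that triggers the warning above

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var nameservers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        if len(nameservers) > maxNameservers {
            // Mirrors the logged behaviour: keep the first three, warn about the rest.
            fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
                strings.Join(nameservers[:maxNameservers], " "))
        } else {
            fmt.Printf("nameservers: %s\n", strings.Join(nameservers, " "))
        }
    }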
Jul 7 06:05:30.439218 kubelet[2118]: W0707 06:05:30.439130 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:30.439293 kubelet[2118]: E0707 06:05:30.439234 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:30.442976 containerd[1447]: time="2025-07-07T06:05:30.442894922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:05:30.444918 containerd[1447]: time="2025-07-07T06:05:30.444876078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:05:30.445992 containerd[1447]: time="2025-07-07T06:05:30.445950530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:05:30.447292 containerd[1447]: time="2025-07-07T06:05:30.447255490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:05:30.447865 containerd[1447]: time="2025-07-07T06:05:30.447674240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 06:05:30.448708 containerd[1447]: time="2025-07-07T06:05:30.448333024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 06:05:30.450364 containerd[1447]: time="2025-07-07T06:05:30.449408437Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:05:30.453358 containerd[1447]: time="2025-07-07T06:05:30.453301223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:05:30.454229 containerd[1447]: time="2025-07-07T06:05:30.454200642Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 529.741618ms" Jul 7 06:05:30.455476 containerd[1447]: time="2025-07-07T06:05:30.455448006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.677785ms" Jul 7 06:05:30.456665 containerd[1447]: 
time="2025-07-07T06:05:30.456619080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.48221ms" Jul 7 06:05:30.588833 containerd[1447]: time="2025-07-07T06:05:30.587934880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:30.588833 containerd[1447]: time="2025-07-07T06:05:30.588025298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:30.588833 containerd[1447]: time="2025-07-07T06:05:30.588080814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:30.588833 containerd[1447]: time="2025-07-07T06:05:30.588186882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:30.590750 containerd[1447]: time="2025-07-07T06:05:30.590207703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:30.590936 containerd[1447]: time="2025-07-07T06:05:30.590265901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:30.590977 containerd[1447]: time="2025-07-07T06:05:30.590933851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:30.591274 containerd[1447]: time="2025-07-07T06:05:30.591227480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:30.592255 containerd[1447]: time="2025-07-07T06:05:30.592156678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:30.592255 containerd[1447]: time="2025-07-07T06:05:30.592197385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:30.592255 containerd[1447]: time="2025-07-07T06:05:30.592215636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:30.592404 containerd[1447]: time="2025-07-07T06:05:30.592276876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:30.616510 systemd[1]: Started cri-containerd-744a36425da45d15256d9eb15fd4d963422cf1a2adfb235a286ef3591777c288.scope - libcontainer container 744a36425da45d15256d9eb15fd4d963422cf1a2adfb235a286ef3591777c288. Jul 7 06:05:30.618054 systemd[1]: Started cri-containerd-b955498a294ccc95ad8e3e7832e9f4ca16422ae4b3d758ae26e8a2a545689f5e.scope - libcontainer container b955498a294ccc95ad8e3e7832e9f4ca16422ae4b3d758ae26e8a2a545689f5e. Jul 7 06:05:30.620007 systemd[1]: Started cri-containerd-c21110df17a34c6a38e2b41f383b84b602e1acd5bd14345db00f99aa0f5c7aa4.scope - libcontainer container c21110df17a34c6a38e2b41f383b84b602e1acd5bd14345db00f99aa0f5c7aa4. 
Jul 7 06:05:30.650073 containerd[1447]: time="2025-07-07T06:05:30.650027464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cd8bbc50091b2a906fc60226b64748e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"744a36425da45d15256d9eb15fd4d963422cf1a2adfb235a286ef3591777c288\"" Jul 7 06:05:30.653637 kubelet[2118]: E0707 06:05:30.653458 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:30.656127 containerd[1447]: time="2025-07-07T06:05:30.656076519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"b955498a294ccc95ad8e3e7832e9f4ca16422ae4b3d758ae26e8a2a545689f5e\"" Jul 7 06:05:30.656342 containerd[1447]: time="2025-07-07T06:05:30.656191393Z" level=info msg="CreateContainer within sandbox \"744a36425da45d15256d9eb15fd4d963422cf1a2adfb235a286ef3591777c288\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:05:30.656982 kubelet[2118]: E0707 06:05:30.656753 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:30.657059 containerd[1447]: time="2025-07-07T06:05:30.656902251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"c21110df17a34c6a38e2b41f383b84b602e1acd5bd14345db00f99aa0f5c7aa4\"" Jul 7 06:05:30.657675 kubelet[2118]: E0707 06:05:30.657308 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:30.658711 containerd[1447]: time="2025-07-07T06:05:30.658678875Z" level=info msg="CreateContainer within sandbox \"b955498a294ccc95ad8e3e7832e9f4ca16422ae4b3d758ae26e8a2a545689f5e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:05:30.659110 containerd[1447]: time="2025-07-07T06:05:30.659069126Z" level=info msg="CreateContainer within sandbox \"c21110df17a34c6a38e2b41f383b84b602e1acd5bd14345db00f99aa0f5c7aa4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:05:30.680129 kubelet[2118]: E0707 06:05:30.680076 2118 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s" Jul 7 06:05:30.690950 containerd[1447]: time="2025-07-07T06:05:30.690881131Z" level=info msg="CreateContainer within sandbox \"744a36425da45d15256d9eb15fd4d963422cf1a2adfb235a286ef3591777c288\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"08ea55758e04c119746363fa7b78edffc1cc78a2318786a036c47e4572e6c49d\"" Jul 7 06:05:30.691616 containerd[1447]: time="2025-07-07T06:05:30.691562530Z" level=info msg="StartContainer for \"08ea55758e04c119746363fa7b78edffc1cc78a2318786a036c47e4572e6c49d\"" Jul 7 06:05:30.694346 containerd[1447]: time="2025-07-07T06:05:30.694283402Z" level=info msg="CreateContainer within sandbox \"c21110df17a34c6a38e2b41f383b84b602e1acd5bd14345db00f99aa0f5c7aa4\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3083dced209783a4d65f0f45996f5e4df3f67c0a4a51df65cf2a052b247b19fa\"" Jul 7 06:05:30.695431 containerd[1447]: time="2025-07-07T06:05:30.694789288Z" level=info msg="StartContainer for \"3083dced209783a4d65f0f45996f5e4df3f67c0a4a51df65cf2a052b247b19fa\"" Jul 7 06:05:30.699971 containerd[1447]: time="2025-07-07T06:05:30.699770976Z" level=info msg="CreateContainer within sandbox \"b955498a294ccc95ad8e3e7832e9f4ca16422ae4b3d758ae26e8a2a545689f5e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0bb9e7de5365096f32c7b76108e25ef5edf4b8a46206c8832465c964b21dbac2\"" Jul 7 06:05:30.700434 containerd[1447]: time="2025-07-07T06:05:30.700379368Z" level=info msg="StartContainer for \"0bb9e7de5365096f32c7b76108e25ef5edf4b8a46206c8832465c964b21dbac2\"" Jul 7 06:05:30.719210 kubelet[2118]: W0707 06:05:30.719144 2118 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jul 7 06:05:30.719210 kubelet[2118]: E0707 06:05:30.719215 2118 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.91:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:05:30.721544 systemd[1]: Started cri-containerd-08ea55758e04c119746363fa7b78edffc1cc78a2318786a036c47e4572e6c49d.scope - libcontainer container 08ea55758e04c119746363fa7b78edffc1cc78a2318786a036c47e4572e6c49d. Jul 7 06:05:30.725888 systemd[1]: Started cri-containerd-0bb9e7de5365096f32c7b76108e25ef5edf4b8a46206c8832465c964b21dbac2.scope - libcontainer container 0bb9e7de5365096f32c7b76108e25ef5edf4b8a46206c8832465c964b21dbac2. Jul 7 06:05:30.727273 systemd[1]: Started cri-containerd-3083dced209783a4d65f0f45996f5e4df3f67c0a4a51df65cf2a052b247b19fa.scope - libcontainer container 3083dced209783a4d65f0f45996f5e4df3f67c0a4a51df65cf2a052b247b19fa. 
Jul 7 06:05:30.772111 containerd[1447]: time="2025-07-07T06:05:30.772054602Z" level=info msg="StartContainer for \"3083dced209783a4d65f0f45996f5e4df3f67c0a4a51df65cf2a052b247b19fa\" returns successfully" Jul 7 06:05:30.772245 containerd[1447]: time="2025-07-07T06:05:30.772214786Z" level=info msg="StartContainer for \"08ea55758e04c119746363fa7b78edffc1cc78a2318786a036c47e4572e6c49d\" returns successfully" Jul 7 06:05:30.772271 containerd[1447]: time="2025-07-07T06:05:30.772247567Z" level=info msg="StartContainer for \"0bb9e7de5365096f32c7b76108e25ef5edf4b8a46206c8832465c964b21dbac2\" returns successfully" Jul 7 06:05:31.042021 kubelet[2118]: I0707 06:05:31.041921 2118 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:05:31.294810 kubelet[2118]: E0707 06:05:31.294715 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:31.296210 kubelet[2118]: E0707 06:05:31.296186 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:31.297709 kubelet[2118]: E0707 06:05:31.297686 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:32.300895 kubelet[2118]: E0707 06:05:32.300864 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:33.096074 kubelet[2118]: E0707 06:05:33.096024 2118 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:05:33.182765 kubelet[2118]: I0707 06:05:33.182718 2118 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:05:33.182765 kubelet[2118]: E0707 06:05:33.182761 2118 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:05:33.262447 kubelet[2118]: I0707 06:05:33.262386 2118 apiserver.go:52] "Watching apiserver" Jul 7 06:05:33.275264 kubelet[2118]: I0707 06:05:33.275215 2118 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 06:05:33.306275 kubelet[2118]: E0707 06:05:33.306232 2118 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:33.306611 kubelet[2118]: E0707 06:05:33.306423 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:33.419921 kubelet[2118]: E0707 06:05:33.419818 2118 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:33.420449 kubelet[2118]: E0707 06:05:33.420377 2118 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:34.998147 
systemd[1]: Reloading requested from client PID 2393 ('systemctl') (unit session-7.scope)... Jul 7 06:05:34.998171 systemd[1]: Reloading... Jul 7 06:05:35.065439 zram_generator::config[2432]: No configuration found. Jul 7 06:05:35.154123 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:05:35.225128 systemd[1]: Reloading finished in 226 ms. Jul 7 06:05:35.256945 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:35.270430 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:05:35.270663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:35.270724 systemd[1]: kubelet.service: Consumed 1.602s CPU time, 130.1M memory peak, 0B memory swap peak. Jul 7 06:05:35.280652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:05:35.377155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:05:35.382500 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:05:35.420035 kubelet[2474]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:05:35.420035 kubelet[2474]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 06:05:35.420035 kubelet[2474]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:05:35.420407 kubelet[2474]: I0707 06:05:35.420042 2474 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:05:35.426009 kubelet[2474]: I0707 06:05:35.425975 2474 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 06:05:35.426009 kubelet[2474]: I0707 06:05:35.426004 2474 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:05:35.426222 kubelet[2474]: I0707 06:05:35.426197 2474 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 06:05:35.427565 kubelet[2474]: I0707 06:05:35.427541 2474 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 06:05:35.429785 kubelet[2474]: I0707 06:05:35.429601 2474 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:05:35.434336 kubelet[2474]: E0707 06:05:35.432395 2474 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 06:05:35.434336 kubelet[2474]: I0707 06:05:35.432436 2474 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
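The kubelet messages in this log use the klog header format: severity letter (I/W/E/F), MMDD date, wall-clock time, PID, then source file and line before the message. A small parser for that header, applied to a sample line copied from the log; the regular expression is my own approximation of the format, not klog's:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Approximate match for "I0707 06:05:35.425975 2474 server.go:491] ..."
    var klogHeader = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w._-]+):(\d+)\] (.*)$`)

    func main() {
        line := `I0707 06:05:35.425975 2474 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"`
        m := klogHeader.FindStringSubmatch(line)
        if m == nil {
            panic("line did not match klog header")
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%s\n",
            m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }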
Jul 7 06:05:35.434835 kubelet[2474]: I0707 06:05:35.434817 2474 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:05:35.434934 kubelet[2474]: I0707 06:05:35.434918 2474 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 06:05:35.435041 kubelet[2474]: I0707 06:05:35.435017 2474 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:05:35.435470 kubelet[2474]: I0707 06:05:35.435038 2474 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:05:35.435564 kubelet[2474]: I0707 06:05:35.435475 2474 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:05:35.435564 kubelet[2474]: I0707 06:05:35.435485 2474 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 06:05:35.435564 kubelet[2474]: I0707 06:05:35.435521 2474 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:05:35.435645 kubelet[2474]: I0707 06:05:35.435614 2474 kubelet.go:408] "Attempting to sync node with API server" Jul 7 06:05:35.435645 kubelet[2474]: I0707 06:05:35.435630 2474 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:05:35.435645 kubelet[2474]: I0707 06:05:35.435645 2474 kubelet.go:314] "Adding apiserver pod source" Jul 7 06:05:35.435705 kubelet[2474]: I0707 06:05:35.435656 2474 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:05:35.436583 kubelet[2474]: I0707 06:05:35.436561 2474 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 06:05:35.437028 kubelet[2474]: I0707 06:05:35.437014 2474 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:05:35.438354 kubelet[2474]: I0707 06:05:35.437374 2474 server.go:1274] "Started kubelet" Jul 7 06:05:35.438354 kubelet[2474]: I0707 
06:05:35.437920 2474 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:05:35.438354 kubelet[2474]: I0707 06:05:35.438127 2474 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:05:35.438354 kubelet[2474]: I0707 06:05:35.438169 2474 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:05:35.438675 kubelet[2474]: I0707 06:05:35.438653 2474 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:05:35.438960 kubelet[2474]: I0707 06:05:35.438916 2474 server.go:449] "Adding debug handlers to kubelet server" Jul 7 06:05:35.440050 kubelet[2474]: I0707 06:05:35.439927 2474 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 06:05:35.440050 kubelet[2474]: I0707 06:05:35.440000 2474 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 06:05:35.440249 kubelet[2474]: I0707 06:05:35.440105 2474 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:05:35.441464 kubelet[2474]: I0707 06:05:35.440954 2474 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:05:35.441464 kubelet[2474]: E0707 06:05:35.441392 2474 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:05:35.441464 kubelet[2474]: E0707 06:05:35.441430 2474 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:05:35.447700 kubelet[2474]: I0707 06:05:35.447671 2474 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:05:35.447773 kubelet[2474]: I0707 06:05:35.447757 2474 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:05:35.462662 kubelet[2474]: I0707 06:05:35.462608 2474 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:05:35.463072 kubelet[2474]: I0707 06:05:35.462811 2474 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:05:35.464077 kubelet[2474]: I0707 06:05:35.464045 2474 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 06:05:35.464077 kubelet[2474]: I0707 06:05:35.464074 2474 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 06:05:35.464162 kubelet[2474]: I0707 06:05:35.464092 2474 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 06:05:35.464162 kubelet[2474]: E0707 06:05:35.464129 2474 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:05:35.495205 kubelet[2474]: I0707 06:05:35.495173 2474 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 06:05:35.495205 kubelet[2474]: I0707 06:05:35.495192 2474 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 06:05:35.495205 kubelet[2474]: I0707 06:05:35.495211 2474 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:05:35.495414 kubelet[2474]: I0707 06:05:35.495374 2474 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:05:35.495414 kubelet[2474]: I0707 06:05:35.495385 2474 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:05:35.495414 kubelet[2474]: I0707 06:05:35.495402 2474 policy_none.go:49] "None policy: Start" Jul 7 06:05:35.495929 kubelet[2474]: I0707 06:05:35.495902 2474 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 06:05:35.495929 kubelet[2474]: I0707 06:05:35.495924 2474 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:05:35.496076 kubelet[2474]: I0707 06:05:35.496054 2474 state_mem.go:75] "Updated machine memory state" Jul 7 06:05:35.499580 kubelet[2474]: I0707 06:05:35.499557 2474 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:05:35.499885 kubelet[2474]: I0707 06:05:35.499725 2474 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:05:35.499885 kubelet[2474]: I0707 06:05:35.499741 2474 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:05:35.499971 kubelet[2474]: I0707 06:05:35.499933 2474 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:05:35.604537 kubelet[2474]: I0707 06:05:35.604430 2474 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 06:05:35.610460 kubelet[2474]: I0707 06:05:35.610430 2474 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 7 06:05:35.610570 kubelet[2474]: I0707 06:05:35.610507 2474 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 06:05:35.641492 kubelet[2474]: I0707 06:05:35.641438 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:35.641492 kubelet[2474]: I0707 06:05:35.641478 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:05:35.641492 kubelet[2474]: I0707 06:05:35.641499 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd8bbc50091b2a906fc60226b64748e0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd8bbc50091b2a906fc60226b64748e0\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:35.641657 kubelet[2474]: I0707 06:05:35.641515 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:35.641657 kubelet[2474]: I0707 06:05:35.641539 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:35.641657 kubelet[2474]: I0707 06:05:35.641554 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:35.641657 kubelet[2474]: I0707 06:05:35.641568 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd8bbc50091b2a906fc60226b64748e0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cd8bbc50091b2a906fc60226b64748e0\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:35.641657 kubelet[2474]: I0707 06:05:35.641583 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd8bbc50091b2a906fc60226b64748e0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cd8bbc50091b2a906fc60226b64748e0\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:35.641769 kubelet[2474]: I0707 06:05:35.641597 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:05:35.870703 kubelet[2474]: E0707 06:05:35.870597 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:35.870703 kubelet[2474]: E0707 06:05:35.870619 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:35.870703 kubelet[2474]: E0707 06:05:35.870600 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:36.436225 kubelet[2474]: I0707 06:05:36.436179 2474 apiserver.go:52] "Watching apiserver" Jul 7 06:05:36.440390 kubelet[2474]: I0707 06:05:36.440363 2474 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Jul 7 06:05:36.483401 kubelet[2474]: E0707 06:05:36.483366 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:36.488301 kubelet[2474]: E0707 06:05:36.488227 2474 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:05:36.488629 kubelet[2474]: E0707 06:05:36.488570 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:36.488778 kubelet[2474]: E0707 06:05:36.488747 2474 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:05:36.488900 kubelet[2474]: E0707 06:05:36.488878 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:36.509714 kubelet[2474]: I0707 06:05:36.509638 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.509622204 podStartE2EDuration="1.509622204s" podCreationTimestamp="2025-07-07 06:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:36.502372245 +0000 UTC m=+1.116869501" watchObservedRunningTime="2025-07-07 06:05:36.509622204 +0000 UTC m=+1.124119420" Jul 7 06:05:36.518116 kubelet[2474]: I0707 06:05:36.518068 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.518053341 podStartE2EDuration="1.518053341s" podCreationTimestamp="2025-07-07 06:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:36.517882727 +0000 UTC m=+1.132379983" watchObservedRunningTime="2025-07-07 06:05:36.518053341 +0000 UTC m=+1.132550597" Jul 7 06:05:36.518302 kubelet[2474]: I0707 06:05:36.518160 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.518156734 podStartE2EDuration="1.518156734s" podCreationTimestamp="2025-07-07 06:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:36.509915858 +0000 UTC m=+1.124413114" watchObservedRunningTime="2025-07-07 06:05:36.518156734 +0000 UTC m=+1.132653990" Jul 7 06:05:37.484811 kubelet[2474]: E0707 06:05:37.484730 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:37.484811 kubelet[2474]: E0707 06:05:37.484760 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:38.486413 kubelet[2474]: E0707 06:05:38.486379 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 7 06:05:40.827073 kubelet[2474]: I0707 06:05:40.827036 2474 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:05:40.827509 containerd[1447]: time="2025-07-07T06:05:40.827363555Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:05:40.828339 kubelet[2474]: I0707 06:05:40.827828 2474 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:05:41.817431 systemd[1]: Created slice kubepods-besteffort-podb728dc20_ec81_4c1b_963e_f9f084d6fa64.slice - libcontainer container kubepods-besteffort-podb728dc20_ec81_4c1b_963e_f9f084d6fa64.slice. Jul 7 06:05:41.934226 systemd[1]: Created slice kubepods-besteffort-podc276b1b4_272c_4e61_b28a_0e958cf834ab.slice - libcontainer container kubepods-besteffort-podc276b1b4_272c_4e61_b28a_0e958cf834ab.slice. Jul 7 06:05:41.980199 kubelet[2474]: I0707 06:05:41.980136 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b728dc20-ec81-4c1b-963e-f9f084d6fa64-kube-proxy\") pod \"kube-proxy-clbrl\" (UID: \"b728dc20-ec81-4c1b-963e-f9f084d6fa64\") " pod="kube-system/kube-proxy-clbrl" Jul 7 06:05:41.980199 kubelet[2474]: I0707 06:05:41.980189 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b728dc20-ec81-4c1b-963e-f9f084d6fa64-xtables-lock\") pod \"kube-proxy-clbrl\" (UID: \"b728dc20-ec81-4c1b-963e-f9f084d6fa64\") " pod="kube-system/kube-proxy-clbrl" Jul 7 06:05:41.980617 kubelet[2474]: I0707 06:05:41.980212 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5hx5\" (UniqueName: \"kubernetes.io/projected/b728dc20-ec81-4c1b-963e-f9f084d6fa64-kube-api-access-f5hx5\") pod \"kube-proxy-clbrl\" (UID: \"b728dc20-ec81-4c1b-963e-f9f084d6fa64\") " pod="kube-system/kube-proxy-clbrl" Jul 7 06:05:41.980617 kubelet[2474]: I0707 06:05:41.980233 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b728dc20-ec81-4c1b-963e-f9f084d6fa64-lib-modules\") pod \"kube-proxy-clbrl\" (UID: \"b728dc20-ec81-4c1b-963e-f9f084d6fa64\") " pod="kube-system/kube-proxy-clbrl" Jul 7 06:05:42.081069 kubelet[2474]: I0707 06:05:42.080939 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9dl\" (UniqueName: \"kubernetes.io/projected/c276b1b4-272c-4e61-b28a-0e958cf834ab-kube-api-access-wh9dl\") pod \"tigera-operator-5bf8dfcb4-mmwcs\" (UID: \"c276b1b4-272c-4e61-b28a-0e958cf834ab\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mmwcs" Jul 7 06:05:42.081180 kubelet[2474]: I0707 06:05:42.081088 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c276b1b4-272c-4e61-b28a-0e958cf834ab-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-mmwcs\" (UID: \"c276b1b4-272c-4e61-b28a-0e958cf834ab\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mmwcs" Jul 7 06:05:42.130648 kubelet[2474]: E0707 06:05:42.130430 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 
06:05:42.131575 containerd[1447]: time="2025-07-07T06:05:42.131538981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clbrl,Uid:b728dc20-ec81-4c1b-963e-f9f084d6fa64,Namespace:kube-system,Attempt:0,}" Jul 7 06:05:42.149990 containerd[1447]: time="2025-07-07T06:05:42.149844213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:42.149990 containerd[1447]: time="2025-07-07T06:05:42.149893703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:42.149990 containerd[1447]: time="2025-07-07T06:05:42.149904786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:42.149990 containerd[1447]: time="2025-07-07T06:05:42.149988723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:42.175528 systemd[1]: Started cri-containerd-89664024963f40845aa6abea93179c8c2fdd47ecc23ba8b0dc9b2ddf55c78f29.scope - libcontainer container 89664024963f40845aa6abea93179c8c2fdd47ecc23ba8b0dc9b2ddf55c78f29. Jul 7 06:05:42.198005 containerd[1447]: time="2025-07-07T06:05:42.197963048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-clbrl,Uid:b728dc20-ec81-4c1b-963e-f9f084d6fa64,Namespace:kube-system,Attempt:0,} returns sandbox id \"89664024963f40845aa6abea93179c8c2fdd47ecc23ba8b0dc9b2ddf55c78f29\"" Jul 7 06:05:42.198992 kubelet[2474]: E0707 06:05:42.198763 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:42.200756 containerd[1447]: time="2025-07-07T06:05:42.200693579Z" level=info msg="CreateContainer within sandbox \"89664024963f40845aa6abea93179c8c2fdd47ecc23ba8b0dc9b2ddf55c78f29\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 06:05:42.212873 containerd[1447]: time="2025-07-07T06:05:42.212791632Z" level=info msg="CreateContainer within sandbox \"89664024963f40845aa6abea93179c8c2fdd47ecc23ba8b0dc9b2ddf55c78f29\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6b8701a6f45c647ec2b4d1f2eaa342c921931b3125e634e4e7432259f9ab031e\"" Jul 7 06:05:42.213360 containerd[1447]: time="2025-07-07T06:05:42.213313261Z" level=info msg="StartContainer for \"6b8701a6f45c647ec2b4d1f2eaa342c921931b3125e634e4e7432259f9ab031e\"" Jul 7 06:05:42.237244 containerd[1447]: time="2025-07-07T06:05:42.236902440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mmwcs,Uid:c276b1b4-272c-4e61-b28a-0e958cf834ab,Namespace:tigera-operator,Attempt:0,}" Jul 7 06:05:42.242499 systemd[1]: Started cri-containerd-6b8701a6f45c647ec2b4d1f2eaa342c921931b3125e634e4e7432259f9ab031e.scope - libcontainer container 6b8701a6f45c647ec2b4d1f2eaa342c921931b3125e634e4e7432259f9ab031e. Jul 7 06:05:42.255108 containerd[1447]: time="2025-07-07T06:05:42.255006391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:42.255108 containerd[1447]: time="2025-07-07T06:05:42.255085127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:42.255280 containerd[1447]: time="2025-07-07T06:05:42.255100010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:42.255280 containerd[1447]: time="2025-07-07T06:05:42.255186868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:42.267235 containerd[1447]: time="2025-07-07T06:05:42.267147853Z" level=info msg="StartContainer for \"6b8701a6f45c647ec2b4d1f2eaa342c921931b3125e634e4e7432259f9ab031e\" returns successfully" Jul 7 06:05:42.277482 systemd[1]: Started cri-containerd-aae909fc41be7009e0ca6b60a554183ccd481caa49e181e20b16e2be95e72ec2.scope - libcontainer container aae909fc41be7009e0ca6b60a554183ccd481caa49e181e20b16e2be95e72ec2. Jul 7 06:05:42.307514 containerd[1447]: time="2025-07-07T06:05:42.307475856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mmwcs,Uid:c276b1b4-272c-4e61-b28a-0e958cf834ab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aae909fc41be7009e0ca6b60a554183ccd481caa49e181e20b16e2be95e72ec2\"" Jul 7 06:05:42.309821 containerd[1447]: time="2025-07-07T06:05:42.309558492Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 06:05:42.493361 kubelet[2474]: E0707 06:05:42.492662 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:43.388148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2590106371.mount: Deactivated successfully. Jul 7 06:05:43.919463 containerd[1447]: time="2025-07-07T06:05:43.919394696Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:43.920041 containerd[1447]: time="2025-07-07T06:05:43.920015379Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 7 06:05:43.920704 containerd[1447]: time="2025-07-07T06:05:43.920668829Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:43.922896 containerd[1447]: time="2025-07-07T06:05:43.922871345Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:43.923635 containerd[1447]: time="2025-07-07T06:05:43.923552440Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.613962742s" Jul 7 06:05:43.923635 containerd[1447]: time="2025-07-07T06:05:43.923583127Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 7 06:05:43.925810 containerd[1447]: time="2025-07-07T06:05:43.925658178Z" level=info msg="CreateContainer within sandbox \"aae909fc41be7009e0ca6b60a554183ccd481caa49e181e20b16e2be95e72ec2\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 06:05:43.935302 containerd[1447]: time="2025-07-07T06:05:43.935264322Z" level=info msg="CreateContainer within sandbox \"aae909fc41be7009e0ca6b60a554183ccd481caa49e181e20b16e2be95e72ec2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d9f5d0a76785aeb9d97d09a0a21431988c24285ce20396a5b3c5564659184e03\"" Jul 7 06:05:43.936101 containerd[1447]: time="2025-07-07T06:05:43.935827353Z" level=info msg="StartContainer for \"d9f5d0a76785aeb9d97d09a0a21431988c24285ce20396a5b3c5564659184e03\"" Jul 7 06:05:43.959499 systemd[1]: Started cri-containerd-d9f5d0a76785aeb9d97d09a0a21431988c24285ce20396a5b3c5564659184e03.scope - libcontainer container d9f5d0a76785aeb9d97d09a0a21431988c24285ce20396a5b3c5564659184e03. Jul 7 06:05:43.981116 containerd[1447]: time="2025-07-07T06:05:43.981077603Z" level=info msg="StartContainer for \"d9f5d0a76785aeb9d97d09a0a21431988c24285ce20396a5b3c5564659184e03\" returns successfully" Jul 7 06:05:44.505402 kubelet[2474]: I0707 06:05:44.505113 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-clbrl" podStartSLOduration=3.505096349 podStartE2EDuration="3.505096349s" podCreationTimestamp="2025-07-07 06:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:05:42.500630256 +0000 UTC m=+7.115127512" watchObservedRunningTime="2025-07-07 06:05:44.505096349 +0000 UTC m=+9.119593605" Jul 7 06:05:45.499836 kubelet[2474]: E0707 06:05:45.499801 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:45.541801 kubelet[2474]: I0707 06:05:45.541741 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-mmwcs" podStartSLOduration=2.925896229 podStartE2EDuration="4.541725673s" podCreationTimestamp="2025-07-07 06:05:41 +0000 UTC" firstStartedPulling="2025-07-07 06:05:42.308556442 +0000 UTC m=+6.923053698" lastFinishedPulling="2025-07-07 06:05:43.924385886 +0000 UTC m=+8.538883142" observedRunningTime="2025-07-07 06:05:44.505224213 +0000 UTC m=+9.119721469" watchObservedRunningTime="2025-07-07 06:05:45.541725673 +0000 UTC m=+10.156222929" Jul 7 06:05:45.986927 kubelet[2474]: E0707 06:05:45.986666 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:46.510122 kubelet[2474]: E0707 06:05:46.510090 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:48.202381 kubelet[2474]: E0707 06:05:48.202168 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:49.091446 update_engine[1423]: I20250707 06:05:49.091373 1423 update_attempter.cc:509] Updating boot flags... 
Jul 7 06:05:49.144914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2863) Jul 7 06:05:49.206722 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2864) Jul 7 06:05:49.339851 sudo[1619]: pam_unix(sudo:session): session closed for user root Jul 7 06:05:49.342524 sshd[1616]: pam_unix(sshd:session): session closed for user core Jul 7 06:05:49.346412 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:33218.service: Deactivated successfully. Jul 7 06:05:49.348033 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:05:49.348186 systemd[1]: session-7.scope: Consumed 6.680s CPU time, 152.3M memory peak, 0B memory swap peak. Jul 7 06:05:49.349834 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Jul 7 06:05:49.351983 systemd-logind[1421]: Removed session 7. Jul 7 06:05:54.719058 systemd[1]: Created slice kubepods-besteffort-podc360f6d9_d56f_4ce2_8eee_5dd735315425.slice - libcontainer container kubepods-besteffort-podc360f6d9_d56f_4ce2_8eee_5dd735315425.slice. Jul 7 06:05:54.772732 kubelet[2474]: I0707 06:05:54.772694 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c360f6d9-d56f-4ce2-8eee-5dd735315425-typha-certs\") pod \"calico-typha-5659c7c4f6-bvhfv\" (UID: \"c360f6d9-d56f-4ce2-8eee-5dd735315425\") " pod="calico-system/calico-typha-5659c7c4f6-bvhfv" Jul 7 06:05:54.773087 kubelet[2474]: I0707 06:05:54.772739 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c360f6d9-d56f-4ce2-8eee-5dd735315425-tigera-ca-bundle\") pod \"calico-typha-5659c7c4f6-bvhfv\" (UID: \"c360f6d9-d56f-4ce2-8eee-5dd735315425\") " pod="calico-system/calico-typha-5659c7c4f6-bvhfv" Jul 7 06:05:54.773087 kubelet[2474]: I0707 06:05:54.772759 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24j8\" (UniqueName: \"kubernetes.io/projected/c360f6d9-d56f-4ce2-8eee-5dd735315425-kube-api-access-c24j8\") pod \"calico-typha-5659c7c4f6-bvhfv\" (UID: \"c360f6d9-d56f-4ce2-8eee-5dd735315425\") " pod="calico-system/calico-typha-5659c7c4f6-bvhfv" Jul 7 06:05:54.946233 systemd[1]: Created slice kubepods-besteffort-pod55454c80_c007_47f9_bdd3_e11d56a5fc41.slice - libcontainer container kubepods-besteffort-pod55454c80_c007_47f9_bdd3_e11d56a5fc41.slice. 
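The slice units systemd creates for pods above encode the QoS class and pod UID, with the UID's dashes escaped to underscores because "-" delimits hierarchy levels in slice unit names (compare pod UID b728dc20-ec81-4c1b-963e-f9f084d6fa64 with its slice kubepods-besteffort-podb728dc20_ec81_4c1b_963e_f9f084d6fa64.slice). A sketch of that naming convention as it appears in this log; illustrative only, not kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the convention visible in the log: QoS class and
    // pod UID joined into one slice unit, with the UID's dashes escaped to
    // underscores so they do not read as slice-hierarchy separators.
    func sliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceName("besteffort", "55454c80-c007-47f9-bdd3-e11d56a5fc41"))
        // kubepods-besteffort-pod55454c80_c007_47f9_bdd3_e11d56a5fc41.slice
    }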
Jul 7 06:05:54.974412 kubelet[2474]: I0707 06:05:54.974287 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-cni-log-dir\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974412 kubelet[2474]: I0707 06:05:54.974342 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-policysync\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974412 kubelet[2474]: I0707 06:05:54.974363 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55454c80-c007-47f9-bdd3-e11d56a5fc41-tigera-ca-bundle\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974578 kubelet[2474]: I0707 06:05:54.974428 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj5jl\" (UniqueName: \"kubernetes.io/projected/55454c80-c007-47f9-bdd3-e11d56a5fc41-kube-api-access-cj5jl\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974578 kubelet[2474]: I0707 06:05:54.974469 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-var-lib-calico\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974578 kubelet[2474]: I0707 06:05:54.974517 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-var-run-calico\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974578 kubelet[2474]: I0707 06:05:54.974534 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-cni-net-dir\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974670 kubelet[2474]: I0707 06:05:54.974590 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-flexvol-driver-host\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974670 kubelet[2474]: I0707 06:05:54.974607 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/55454c80-c007-47f9-bdd3-e11d56a5fc41-node-certs\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974670 kubelet[2474]: I0707 06:05:54.974632 2474 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-xtables-lock\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974670 kubelet[2474]: I0707 06:05:54.974648 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-lib-modules\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:54.974670 kubelet[2474]: I0707 06:05:54.974663 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/55454c80-c007-47f9-bdd3-e11d56a5fc41-cni-bin-dir\") pod \"calico-node-q6wfx\" (UID: \"55454c80-c007-47f9-bdd3-e11d56a5fc41\") " pod="calico-system/calico-node-q6wfx" Jul 7 06:05:55.023794 kubelet[2474]: E0707 06:05:55.023762 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:55.024229 containerd[1447]: time="2025-07-07T06:05:55.024192252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5659c7c4f6-bvhfv,Uid:c360f6d9-d56f-4ce2-8eee-5dd735315425,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:55.045035 containerd[1447]: time="2025-07-07T06:05:55.044801438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:55.045035 containerd[1447]: time="2025-07-07T06:05:55.044851843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:55.045035 containerd[1447]: time="2025-07-07T06:05:55.044863005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:55.045035 containerd[1447]: time="2025-07-07T06:05:55.044932812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:55.071943 systemd[1]: Started cri-containerd-e7a5e14b8ee136d8ce4e9f224057a496836cf0ce39f387fd9b4256c8ef838e8b.scope - libcontainer container e7a5e14b8ee136d8ce4e9f224057a496836cf0ce39f387fd9b4256c8ef838e8b. Jul 7 06:05:55.083370 kubelet[2474]: E0707 06:05:55.083332 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.083370 kubelet[2474]: W0707 06:05:55.083359 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.101281 kubelet[2474]: E0707 06:05:55.099720 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:55.102332 kubelet[2474]: E0707 06:05:55.102291 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.102332 kubelet[2474]: W0707 06:05:55.102314 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.102429 kubelet[2474]: E0707 06:05:55.102346 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.104610 kubelet[2474]: E0707 06:05:55.104556 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jghp4" podUID="632e2793-d8ae-43c1-a1dd-7d580aa97009" Jul 7 06:05:55.142580 containerd[1447]: time="2025-07-07T06:05:55.142542753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5659c7c4f6-bvhfv,Uid:c360f6d9-d56f-4ce2-8eee-5dd735315425,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7a5e14b8ee136d8ce4e9f224057a496836cf0ce39f387fd9b4256c8ef838e8b\"" Jul 7 06:05:55.143309 kubelet[2474]: E0707 06:05:55.143279 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:55.144365 containerd[1447]: time="2025-07-07T06:05:55.144133165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 06:05:55.170274 kubelet[2474]: E0707 06:05:55.170227 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.170274 kubelet[2474]: W0707 06:05:55.170250 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.170274 kubelet[2474]: E0707 06:05:55.170268 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.170548 kubelet[2474]: E0707 06:05:55.170507 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.170548 kubelet[2474]: W0707 06:05:55.170516 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.170548 kubelet[2474]: E0707 06:05:55.170527 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the three-message FlexVolume probe failure above (driver-call.go:262, driver-call.go:149, plugins.go:691) repeats with successive timestamps through 06:05:55.176; duplicate entries omitted] Jul 7 06:05:55.177073 kubelet[2474]: E0707 06:05:55.176341 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.177073 kubelet[2474]: W0707 06:05:55.176360 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.177073 kubelet[2474]: E0707 06:05:55.176374 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:05:55.177073 kubelet[2474]: I0707 06:05:55.176399 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/632e2793-d8ae-43c1-a1dd-7d580aa97009-kubelet-dir\") pod \"csi-node-driver-jghp4\" (UID: \"632e2793-d8ae-43c1-a1dd-7d580aa97009\") " pod="calico-system/csi-node-driver-jghp4" Jul 7 06:05:55.177073 kubelet[2474]: E0707 06:05:55.176618 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.177073 kubelet[2474]: W0707 06:05:55.176629 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.177073 kubelet[2474]: E0707 06:05:55.176648 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.177073 kubelet[2474]: I0707 06:05:55.176664 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/632e2793-d8ae-43c1-a1dd-7d580aa97009-registration-dir\") pod \"csi-node-driver-jghp4\" (UID: \"632e2793-d8ae-43c1-a1dd-7d580aa97009\") " pod="calico-system/csi-node-driver-jghp4" Jul 7 06:05:55.177073 kubelet[2474]: E0707 06:05:55.176886 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.177304 kubelet[2474]: W0707 06:05:55.176899 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.177304 kubelet[2474]: E0707 06:05:55.176917 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.177604 kubelet[2474]: E0707 06:05:55.177449 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.177604 kubelet[2474]: W0707 06:05:55.177465 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.177604 kubelet[2474]: E0707 06:05:55.177482 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.178054 kubelet[2474]: E0707 06:05:55.177701 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.178054 kubelet[2474]: W0707 06:05:55.177712 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.178054 kubelet[2474]: E0707 06:05:55.177728 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:55.178054 kubelet[2474]: I0707 06:05:55.177748 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/632e2793-d8ae-43c1-a1dd-7d580aa97009-socket-dir\") pod \"csi-node-driver-jghp4\" (UID: \"632e2793-d8ae-43c1-a1dd-7d580aa97009\") " pod="calico-system/csi-node-driver-jghp4" Jul 7 06:05:55.178054 kubelet[2474]: E0707 06:05:55.177916 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.178054 kubelet[2474]: W0707 06:05:55.177929 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.178054 kubelet[2474]: E0707 06:05:55.177945 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.178384 kubelet[2474]: E0707 06:05:55.178302 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.178384 kubelet[2474]: W0707 06:05:55.178372 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.178894 kubelet[2474]: E0707 06:05:55.178391 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.178894 kubelet[2474]: E0707 06:05:55.178702 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.178894 kubelet[2474]: W0707 06:05:55.178716 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.178894 kubelet[2474]: E0707 06:05:55.178740 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.178894 kubelet[2474]: I0707 06:05:55.178757 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/632e2793-d8ae-43c1-a1dd-7d580aa97009-varrun\") pod \"csi-node-driver-jghp4\" (UID: \"632e2793-d8ae-43c1-a1dd-7d580aa97009\") " pod="calico-system/csi-node-driver-jghp4" Jul 7 06:05:55.179216 kubelet[2474]: E0707 06:05:55.179154 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.179216 kubelet[2474]: W0707 06:05:55.179197 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.179484 kubelet[2474]: E0707 06:05:55.179390 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:55.179484 kubelet[2474]: I0707 06:05:55.179424 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blk9t\" (UniqueName: \"kubernetes.io/projected/632e2793-d8ae-43c1-a1dd-7d580aa97009-kube-api-access-blk9t\") pod \"csi-node-driver-jghp4\" (UID: \"632e2793-d8ae-43c1-a1dd-7d580aa97009\") " pod="calico-system/csi-node-driver-jghp4" Jul 7 06:05:55.179775 kubelet[2474]: E0707 06:05:55.179626 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.179775 kubelet[2474]: W0707 06:05:55.179643 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.179775 kubelet[2474]: E0707 06:05:55.179700 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.180163 kubelet[2474]: E0707 06:05:55.179967 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.180163 kubelet[2474]: W0707 06:05:55.179983 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.180163 kubelet[2474]: E0707 06:05:55.180019 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.180546 kubelet[2474]: E0707 06:05:55.180407 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.180546 kubelet[2474]: W0707 06:05:55.180424 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.180546 kubelet[2474]: E0707 06:05:55.180440 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.180906 kubelet[2474]: E0707 06:05:55.180720 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.180906 kubelet[2474]: W0707 06:05:55.180735 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.180906 kubelet[2474]: E0707 06:05:55.180745 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:05:55.181175 kubelet[2474]: E0707 06:05:55.181088 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.181175 kubelet[2474]: W0707 06:05:55.181161 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.181427 kubelet[2474]: E0707 06:05:55.181262 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.181648 kubelet[2474]: E0707 06:05:55.181570 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.181648 kubelet[2474]: W0707 06:05:55.181633 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.181814 kubelet[2474]: E0707 06:05:55.181764 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.249440 containerd[1447]: time="2025-07-07T06:05:55.249387891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q6wfx,Uid:55454c80-c007-47f9-bdd3-e11d56a5fc41,Namespace:calico-system,Attempt:0,}" Jul 7 06:05:55.273102 containerd[1447]: time="2025-07-07T06:05:55.272176312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:05:55.273102 containerd[1447]: time="2025-07-07T06:05:55.272227317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:05:55.273102 containerd[1447]: time="2025-07-07T06:05:55.272245239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:55.273102 containerd[1447]: time="2025-07-07T06:05:55.272361732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:05:55.280966 kubelet[2474]: E0707 06:05:55.280937 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.280966 kubelet[2474]: W0707 06:05:55.280960 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.281204 kubelet[2474]: E0707 06:05:55.280980 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the same FlexVolume probe failure triple repeats; duplicate entries omitted] Jul 7 06:05:55.284312 kubelet[2474]: E0707 06:05:55.284170 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.284393 kubelet[2474]: W0707 06:05:55.284361 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.284497 kubelet[2474]: E0707 06:05:55.284465 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:05:55.293945 kubelet[2474]: E0707 06:05:55.293905 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.293945 kubelet[2474]: W0707 06:05:55.293924 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.294136 kubelet[2474]: E0707 06:05:55.293989 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.294605 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.295559 kubelet[2474]: W0707 06:05:55.294623 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.294687 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.294857 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.295559 kubelet[2474]: W0707 06:05:55.294866 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.294890 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.295062 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.295559 kubelet[2474]: W0707 06:05:55.295072 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.295112 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.295559 kubelet[2474]: E0707 06:05:55.295239 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.295485 systemd[1]: Started cri-containerd-d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549.scope - libcontainer container d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549. 
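The driver-call.go:262 / driver-call.go:149 / plugins.go:691 triple that dominates this stretch of the log is kubelet's FlexVolume dynamic probe: each candidate driver under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is executed with an init argument and its stdout is JSON-decoded. Below is a self-contained sketch of that failure chain, assuming only that the driver binary is absent; the driverStatus shape and probeDriver helper are illustrative, not kubelet's actual code. When the executable cannot be found, os/exec reports "executable file not found in $PATH", the captured output is empty, and json.Unmarshal of empty input fails with exactly "unexpected end of JSON input".

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an assumed, minimal version of the JSON a FlexVolume
// driver is expected to print on stdout in response to "init".
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// probeDriver mirrors the failure chain in the log: run `<driver> init`,
// then JSON-decode whatever came back.
func probeDriver(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// For a bare command name missing from PATH, os/exec reports
		// exec.ErrNotFound: "executable file not found in $PATH".
		fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n", driver, err, out)
	}
	var st driverStatus
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		// Decoding the empty output fails with exactly
		// "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, jerr)
	}
	return &st, nil
}

func main() {
	if _, err := probeDriver("uds"); err != nil { // "uds" assumed absent from PATH
		fmt.Println(err)
	}
}

Presumably, installing a driver that answers init with {"status":"Success"} (or removing the empty nodeagent~uds directory) would quiet this probe loop.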
Jul 7 06:05:55.295903 kubelet[2474]: W0707 06:05:55.295249 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.295903 kubelet[2474]: E0707 06:05:55.295272 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [identical FlexVolume probe failures repeat; duplicate entries omitted] Jul 7 06:05:55.299966 kubelet[2474]: E0707 06:05:55.299921 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.299966 kubelet[2474]: W0707 06:05:55.299941 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.299966 kubelet[2474]: E0707 06:05:55.299953 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:05:55.320063 kubelet[2474]: E0707 06:05:55.319972 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:55.320063 kubelet[2474]: W0707 06:05:55.319997 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:55.320063 kubelet[2474]: E0707 06:05:55.320016 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:55.333187 containerd[1447]: time="2025-07-07T06:05:55.332885268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q6wfx,Uid:55454c80-c007-47f9-bdd3-e11d56a5fc41,Namespace:calico-system,Attempt:0,} returns sandbox id \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\"" Jul 7 06:05:56.152877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039054182.mount: Deactivated successfully. Jul 7 06:05:56.643787 containerd[1447]: time="2025-07-07T06:05:56.643742684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:56.644236 containerd[1447]: time="2025-07-07T06:05:56.644203012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 7 06:05:56.645343 containerd[1447]: time="2025-07-07T06:05:56.645302565Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:56.647392 containerd[1447]: time="2025-07-07T06:05:56.647363218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:56.648038 containerd[1447]: time="2025-07-07T06:05:56.648002244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.503818154s" Jul 7 06:05:56.648109 containerd[1447]: time="2025-07-07T06:05:56.648037768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 7 06:05:56.649142 containerd[1447]: time="2025-07-07T06:05:56.649065033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 06:05:56.662532 containerd[1447]: time="2025-07-07T06:05:56.662495619Z" level=info msg="CreateContainer within sandbox \"e7a5e14b8ee136d8ce4e9f224057a496836cf0ce39f387fd9b4256c8ef838e8b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 06:05:56.673604 containerd[1447]: time="2025-07-07T06:05:56.673559560Z" level=info msg="CreateContainer within sandbox \"e7a5e14b8ee136d8ce4e9f224057a496836cf0ce39f387fd9b4256c8ef838e8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"306e414d2fc87c99574d6a5d2395a9507b898bf576258002b65f9c72f3d1d873\"" Jul 7 
06:05:56.675052 containerd[1447]: time="2025-07-07T06:05:56.673992085Z" level=info msg="StartContainer for \"306e414d2fc87c99574d6a5d2395a9507b898bf576258002b65f9c72f3d1d873\"" Jul 7 06:05:56.698550 systemd[1]: Started cri-containerd-306e414d2fc87c99574d6a5d2395a9507b898bf576258002b65f9c72f3d1d873.scope - libcontainer container 306e414d2fc87c99574d6a5d2395a9507b898bf576258002b65f9c72f3d1d873. Jul 7 06:05:56.726988 containerd[1447]: time="2025-07-07T06:05:56.726165468Z" level=info msg="StartContainer for \"306e414d2fc87c99574d6a5d2395a9507b898bf576258002b65f9c72f3d1d873\" returns successfully" Jul 7 06:05:57.465447 kubelet[2474]: E0707 06:05:57.465373 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jghp4" podUID="632e2793-d8ae-43c1-a1dd-7d580aa97009" Jul 7 06:05:57.543472 kubelet[2474]: E0707 06:05:57.543441 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:57.553241 kubelet[2474]: I0707 06:05:57.553122 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5659c7c4f6-bvhfv" podStartSLOduration=2.047894585 podStartE2EDuration="3.553108005s" podCreationTimestamp="2025-07-07 06:05:54 +0000 UTC" firstStartedPulling="2025-07-07 06:05:55.143700798 +0000 UTC m=+19.758198054" lastFinishedPulling="2025-07-07 06:05:56.648914218 +0000 UTC m=+21.263411474" observedRunningTime="2025-07-07 06:05:57.552055341 +0000 UTC m=+22.166552597" watchObservedRunningTime="2025-07-07 06:05:57.553108005 +0000 UTC m=+22.167605261" Jul 7 06:05:57.590021 kubelet[2474]: E0707 06:05:57.589995 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:57.590021 kubelet[2474]: W0707 06:05:57.590016 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:57.590138 kubelet[2474]: E0707 06:05:57.590034 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:57.590369 kubelet[2474]: E0707 06:05:57.590352 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:57.590369 kubelet[2474]: W0707 06:05:57.590365 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:57.590369 kubelet[2474]: E0707 06:05:57.590375 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [… the same three kubelet messages (driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 "FlexVolume: driver call failed", plugins.go:691 "Error dynamically probing plugins") repeat near-verbatim for each further probe of the nodeagent~uds plugin between 06:05:57.590 and 06:05:57.602 …] Jul 7 06:05:57.602185 kubelet[2474]: E0707 06:05:57.602164 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 06:05:57.602481 kubelet[2474]: E0707 06:05:57.602466 2474 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:05:57.602481 kubelet[2474]: W0707 06:05:57.602479 2474 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:05:57.602553 kubelet[2474]: E0707 06:05:57.602490 2474 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:05:57.689149 containerd[1447]: time="2025-07-07T06:05:57.689109821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:57.690336 containerd[1447]: time="2025-07-07T06:05:57.690283136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 7 06:05:57.691442 containerd[1447]: time="2025-07-07T06:05:57.691412368Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:57.693175 containerd[1447]: time="2025-07-07T06:05:57.693127737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:05:57.694341 containerd[1447]: time="2025-07-07T06:05:57.694058309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.044963472s" Jul 7 06:05:57.694341 containerd[1447]: time="2025-07-07T06:05:57.694090432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 7 06:05:57.696072 containerd[1447]: time="2025-07-07T06:05:57.696043145Z" level=info msg="CreateContainer within sandbox \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:05:57.708913 containerd[1447]: time="2025-07-07T06:05:57.708880571Z" level=info msg="CreateContainer within sandbox \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df\"" Jul 7 06:05:57.709954 containerd[1447]: time="2025-07-07T06:05:57.709469629Z" level=info msg="StartContainer for \"43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df\"" Jul 7 06:05:57.752571 systemd[1]: Started cri-containerd-43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df.scope - libcontainer container 43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df. 
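The unmarshal failures above come from the kubelet's FlexVolume prober: it scans each vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs the like-named binary with the single argument init, and JSON-decodes whatever lands on stdout. Here the nodeagent~uds/uds binary does not exist, so the exec fails, stdout is empty, and decoding "" yields "unexpected end of JSON input"; the triplet repeats because the prober re-runs on each sync. A minimal sketch of the contract a driver would have to satisfy follows; the struct shape is an assumption modelled on kubelet's DriverStatus, not copied from it.

    // Illustrative sketch of a FlexVolume driver satisfying the init call
    // that driver-call.go makes; the kubelet JSON-decodes this stdout.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus approximates the JSON shape the kubelet expects back.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// Report success and declare that no separate attach step is needed.
    		out, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Unimplemented calls must still answer in JSON.
    	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(out))
    	os.Exit(1)
    }

Dropping such a binary into the nodeagent~uds directory (or removing the stale directory) would presumably silence these probe errors.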
Jul 7 06:05:57.782872 containerd[1447]: time="2025-07-07T06:05:57.781212346Z" level=info msg="StartContainer for \"43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df\" returns successfully" Jul 7 06:05:57.807862 systemd[1]: cri-containerd-43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df.scope: Deactivated successfully. Jul 7 06:05:57.848404 containerd[1447]: time="2025-07-07T06:05:57.842844906Z" level=info msg="shim disconnected" id=43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df namespace=k8s.io Jul 7 06:05:57.848404 containerd[1447]: time="2025-07-07T06:05:57.848392813Z" level=warning msg="cleaning up after shim disconnected" id=43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df namespace=k8s.io Jul 7 06:05:57.848404 containerd[1447]: time="2025-07-07T06:05:57.848404654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:05:57.879142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a7871ec593bee6255efe4fe3ef3f942b66c61291a0f6d99206dd3ac20137df-rootfs.mount: Deactivated successfully. Jul 7 06:05:58.548528 kubelet[2474]: I0707 06:05:58.548440 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:05:58.548902 kubelet[2474]: E0707 06:05:58.548761 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:05:58.550328 containerd[1447]: time="2025-07-07T06:05:58.550256521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:05:59.466556 kubelet[2474]: E0707 06:05:59.466513 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jghp4" podUID="632e2793-d8ae-43c1-a1dd-7d580aa97009" Jul 7 06:06:00.166041 containerd[1447]: time="2025-07-07T06:06:00.166000608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:00.166544 containerd[1447]: time="2025-07-07T06:06:00.166507492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 7 06:06:00.167209 containerd[1447]: time="2025-07-07T06:06:00.167181991Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:00.169290 containerd[1447]: time="2025-07-07T06:06:00.169263771Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:00.170221 containerd[1447]: time="2025-07-07T06:06:00.170177770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.619881366s" Jul 7 06:06:00.170221 containerd[1447]: time="2025-07-07T06:06:00.170211893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 7 06:06:00.176589 containerd[1447]: time="2025-07-07T06:06:00.176542562Z" level=info msg="CreateContainer within sandbox \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:06:00.189008 containerd[1447]: time="2025-07-07T06:06:00.188958039Z" level=info msg="CreateContainer within sandbox \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0\"" Jul 7 06:06:00.189429 containerd[1447]: time="2025-07-07T06:06:00.189401517Z" level=info msg="StartContainer for \"f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0\"" Jul 7 06:06:00.219485 systemd[1]: Started cri-containerd-f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0.scope - libcontainer container f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0. Jul 7 06:06:00.248336 containerd[1447]: time="2025-07-07T06:06:00.248274462Z" level=info msg="StartContainer for \"f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0\" returns successfully" Jul 7 06:06:00.821052 systemd[1]: cri-containerd-f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0.scope: Deactivated successfully. Jul 7 06:06:00.843931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0-rootfs.mount: Deactivated successfully. Jul 7 06:06:00.877212 kubelet[2474]: I0707 06:06:00.877169 2474 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 06:06:00.955521 containerd[1447]: time="2025-07-07T06:06:00.955418456Z" level=info msg="shim disconnected" id=f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0 namespace=k8s.io Jul 7 06:06:00.955521 containerd[1447]: time="2025-07-07T06:06:00.955485462Z" level=warning msg="cleaning up after shim disconnected" id=f9ae41415d1a952bed326afc4b3d6dde1b3615f05928076dd22524f2ca4f20a0 namespace=k8s.io Jul 7 06:06:00.955521 containerd[1447]: time="2025-07-07T06:06:00.955495423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 06:06:00.998615 systemd[1]: Created slice kubepods-burstable-pod8ef96388_886e_494c_b484_12fab2731020.slice - libcontainer container kubepods-burstable-pod8ef96388_886e_494c_b484_12fab2731020.slice. Jul 7 06:06:01.006065 systemd[1]: Created slice kubepods-besteffort-pod532a5639_1429_4a85_8fbb_b79c8b04dfd3.slice - libcontainer container kubepods-besteffort-pod532a5639_1429_4a85_8fbb_b79c8b04dfd3.slice. Jul 7 06:06:01.016112 systemd[1]: Created slice kubepods-besteffort-pod66fd4b0e_a42b_41a5_a8e7_b55cecdc3007.slice - libcontainer container kubepods-besteffort-pod66fd4b0e_a42b_41a5_a8e7_b55cecdc3007.slice. Jul 7 06:06:01.023933 systemd[1]: Created slice kubepods-burstable-podcac05d19_02ca_4bd4_9a83_2d4df21aa5b9.slice - libcontainer container kubepods-burstable-podcac05d19_02ca_4bd4_9a83_2d4df21aa5b9.slice. Jul 7 06:06:01.031452 systemd[1]: Created slice kubepods-besteffort-podcc511929_66e0_46d7_a11d_745edf05836f.slice - libcontainer container kubepods-besteffort-podcc511929_66e0_46d7_a11d_745edf05836f.slice. Jul 7 06:06:01.036635 systemd[1]: Created slice kubepods-besteffort-pod18c7c67e_01e0_40ea_8d99_ba460eb1fde4.slice - libcontainer container kubepods-besteffort-pod18c7c67e_01e0_40ea_8d99_ba460eb1fde4.slice. 
Jul 7 06:06:01.043076 systemd[1]: Created slice kubepods-besteffort-pod528c9db9_c113_4e11_bd08_112e227a85e1.slice - libcontainer container kubepods-besteffort-pod528c9db9_c113_4e11_bd08_112e227a85e1.slice. Jul 7 06:06:01.171499 kubelet[2474]: I0707 06:06:01.171372 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvbwj\" (UniqueName: \"kubernetes.io/projected/cac05d19-02ca-4bd4-9a83-2d4df21aa5b9-kube-api-access-zvbwj\") pod \"coredns-7c65d6cfc9-4gfqc\" (UID: \"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9\") " pod="kube-system/coredns-7c65d6cfc9-4gfqc" Jul 7 06:06:01.171499 kubelet[2474]: I0707 06:06:01.171428 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/66fd4b0e-a42b-41a5-a8e7-b55cecdc3007-goldmane-key-pair\") pod \"goldmane-58fd7646b9-6bhqv\" (UID: \"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007\") " pod="calico-system/goldmane-58fd7646b9-6bhqv" Jul 7 06:06:01.171499 kubelet[2474]: I0707 06:06:01.171453 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npztn\" (UniqueName: \"kubernetes.io/projected/18c7c67e-01e0-40ea-8d99-ba460eb1fde4-kube-api-access-npztn\") pod \"calico-kube-controllers-7fd865784c-rm28g\" (UID: \"18c7c67e-01e0-40ea-8d99-ba460eb1fde4\") " pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" Jul 7 06:06:01.171499 kubelet[2474]: I0707 06:06:01.171474 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6lnc\" (UniqueName: \"kubernetes.io/projected/532a5639-1429-4a85-8fbb-b79c8b04dfd3-kube-api-access-r6lnc\") pod \"calico-apiserver-56bdbd79df-xdcfc\" (UID: \"532a5639-1429-4a85-8fbb-b79c8b04dfd3\") " pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" Jul 7 06:06:01.171499 kubelet[2474]: I0707 06:06:01.171491 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ef96388-886e-494c-b484-12fab2731020-config-volume\") pod \"coredns-7c65d6cfc9-lczkp\" (UID: \"8ef96388-886e-494c-b484-12fab2731020\") " pod="kube-system/coredns-7c65d6cfc9-lczkp" Jul 7 06:06:01.171720 kubelet[2474]: I0707 06:06:01.171509 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66fd4b0e-a42b-41a5-a8e7-b55cecdc3007-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-6bhqv\" (UID: \"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007\") " pod="calico-system/goldmane-58fd7646b9-6bhqv" Jul 7 06:06:01.171720 kubelet[2474]: I0707 06:06:01.171525 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkj6g\" (UniqueName: \"kubernetes.io/projected/66fd4b0e-a42b-41a5-a8e7-b55cecdc3007-kube-api-access-mkj6g\") pod \"goldmane-58fd7646b9-6bhqv\" (UID: \"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007\") " pod="calico-system/goldmane-58fd7646b9-6bhqv" Jul 7 06:06:01.171720 kubelet[2474]: I0707 06:06:01.171603 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18c7c67e-01e0-40ea-8d99-ba460eb1fde4-tigera-ca-bundle\") pod \"calico-kube-controllers-7fd865784c-rm28g\" (UID: \"18c7c67e-01e0-40ea-8d99-ba460eb1fde4\") " 
pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" Jul 7 06:06:01.171720 kubelet[2474]: I0707 06:06:01.171626 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc511929-66e0-46d7-a11d-745edf05836f-whisker-ca-bundle\") pod \"whisker-656c4558cd-j6wss\" (UID: \"cc511929-66e0-46d7-a11d-745edf05836f\") " pod="calico-system/whisker-656c4558cd-j6wss" Jul 7 06:06:01.171720 kubelet[2474]: I0707 06:06:01.171645 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/528c9db9-c113-4e11-bd08-112e227a85e1-calico-apiserver-certs\") pod \"calico-apiserver-56bdbd79df-bkhbd\" (UID: \"528c9db9-c113-4e11-bd08-112e227a85e1\") " pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" Jul 7 06:06:01.171833 kubelet[2474]: I0707 06:06:01.171661 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcwkr\" (UniqueName: \"kubernetes.io/projected/528c9db9-c113-4e11-bd08-112e227a85e1-kube-api-access-pcwkr\") pod \"calico-apiserver-56bdbd79df-bkhbd\" (UID: \"528c9db9-c113-4e11-bd08-112e227a85e1\") " pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" Jul 7 06:06:01.171833 kubelet[2474]: I0707 06:06:01.171680 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cac05d19-02ca-4bd4-9a83-2d4df21aa5b9-config-volume\") pod \"coredns-7c65d6cfc9-4gfqc\" (UID: \"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9\") " pod="kube-system/coredns-7c65d6cfc9-4gfqc" Jul 7 06:06:01.171833 kubelet[2474]: I0707 06:06:01.171697 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j94kb\" (UniqueName: \"kubernetes.io/projected/cc511929-66e0-46d7-a11d-745edf05836f-kube-api-access-j94kb\") pod \"whisker-656c4558cd-j6wss\" (UID: \"cc511929-66e0-46d7-a11d-745edf05836f\") " pod="calico-system/whisker-656c4558cd-j6wss" Jul 7 06:06:01.171833 kubelet[2474]: I0707 06:06:01.171713 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/532a5639-1429-4a85-8fbb-b79c8b04dfd3-calico-apiserver-certs\") pod \"calico-apiserver-56bdbd79df-xdcfc\" (UID: \"532a5639-1429-4a85-8fbb-b79c8b04dfd3\") " pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" Jul 7 06:06:01.171833 kubelet[2474]: I0707 06:06:01.171728 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/66fd4b0e-a42b-41a5-a8e7-b55cecdc3007-config\") pod \"goldmane-58fd7646b9-6bhqv\" (UID: \"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007\") " pod="calico-system/goldmane-58fd7646b9-6bhqv" Jul 7 06:06:01.171938 kubelet[2474]: I0707 06:06:01.171746 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc511929-66e0-46d7-a11d-745edf05836f-whisker-backend-key-pair\") pod \"whisker-656c4558cd-j6wss\" (UID: \"cc511929-66e0-46d7-a11d-745edf05836f\") " pod="calico-system/whisker-656c4558cd-j6wss" Jul 7 06:06:01.171938 kubelet[2474]: I0707 06:06:01.171771 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rrzvd\" (UniqueName: \"kubernetes.io/projected/8ef96388-886e-494c-b484-12fab2731020-kube-api-access-rrzvd\") pod \"coredns-7c65d6cfc9-lczkp\" (UID: \"8ef96388-886e-494c-b484-12fab2731020\") " pod="kube-system/coredns-7c65d6cfc9-lczkp" Jul 7 06:06:01.303435 kubelet[2474]: E0707 06:06:01.303381 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:01.305236 containerd[1447]: time="2025-07-07T06:06:01.303948416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lczkp,Uid:8ef96388-886e-494c-b484-12fab2731020,Namespace:kube-system,Attempt:0,}" Jul 7 06:06:01.312995 containerd[1447]: time="2025-07-07T06:06:01.312948045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-xdcfc,Uid:532a5639-1429-4a85-8fbb-b79c8b04dfd3,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:06:01.319127 containerd[1447]: time="2025-07-07T06:06:01.319083675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6bhqv,Uid:66fd4b0e-a42b-41a5-a8e7-b55cecdc3007,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:01.329406 kubelet[2474]: E0707 06:06:01.329311 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:01.343115 containerd[1447]: time="2025-07-07T06:06:01.334981798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656c4558cd-j6wss,Uid:cc511929-66e0-46d7-a11d-745edf05836f,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:01.343115 containerd[1447]: time="2025-07-07T06:06:01.341977860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd865784c-rm28g,Uid:18c7c67e-01e0-40ea-8d99-ba460eb1fde4,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:01.350851 containerd[1447]: time="2025-07-07T06:06:01.346566762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-bkhbd,Uid:528c9db9-c113-4e11-bd08-112e227a85e1,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:06:01.358308 containerd[1447]: time="2025-07-07T06:06:01.355953423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gfqc,Uid:cac05d19-02ca-4bd4-9a83-2d4df21aa5b9,Namespace:kube-system,Attempt:0,}" Jul 7 06:06:01.490800 systemd[1]: Created slice kubepods-besteffort-pod632e2793_d8ae_43c1_a1dd_7d580aa97009.slice - libcontainer container kubepods-besteffort-pod632e2793_d8ae_43c1_a1dd_7d580aa97009.slice. 
Jul 7 06:06:01.495030 containerd[1447]: time="2025-07-07T06:06:01.494766535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jghp4,Uid:632e2793-d8ae-43c1-a1dd-7d580aa97009,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:01.560339 containerd[1447]: time="2025-07-07T06:06:01.560059688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:06:01.798794 containerd[1447]: time="2025-07-07T06:06:01.798649262Z" level=error msg="Failed to destroy network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.799355 containerd[1447]: time="2025-07-07T06:06:01.799265193Z" level=error msg="encountered an error cleaning up failed sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.799426 containerd[1447]: time="2025-07-07T06:06:01.799348560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-bkhbd,Uid:528c9db9-c113-4e11-bd08-112e227a85e1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.803539 containerd[1447]: time="2025-07-07T06:06:01.802937259Z" level=error msg="Failed to destroy network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.803539 containerd[1447]: time="2025-07-07T06:06:01.802955780Z" level=error msg="Failed to destroy network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.803676 kubelet[2474]: E0707 06:06:01.800918 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.803676 kubelet[2474]: E0707 06:06:01.802672 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" Jul 7 06:06:01.803676 kubelet[2474]: E0707 06:06:01.802766 
2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" Jul 7 06:06:01.803908 kubelet[2474]: E0707 06:06:01.802817 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56bdbd79df-bkhbd_calico-apiserver(528c9db9-c113-4e11-bd08-112e227a85e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56bdbd79df-bkhbd_calico-apiserver(528c9db9-c113-4e11-bd08-112e227a85e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" podUID="528c9db9-c113-4e11-bd08-112e227a85e1" Jul 7 06:06:01.804465 containerd[1447]: time="2025-07-07T06:06:01.804434663Z" level=error msg="Failed to destroy network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.805266 containerd[1447]: time="2025-07-07T06:06:01.805197967Z" level=error msg="Failed to destroy network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.805665 containerd[1447]: time="2025-07-07T06:06:01.805599200Z" level=error msg="encountered an error cleaning up failed sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.805665 containerd[1447]: time="2025-07-07T06:06:01.805628163Z" level=error msg="encountered an error cleaning up failed sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.805787 containerd[1447]: time="2025-07-07T06:06:01.805685168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6bhqv,Uid:66fd4b0e-a42b-41a5-a8e7-b55cecdc3007,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.805787 containerd[1447]: 
time="2025-07-07T06:06:01.805658725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-656c4558cd-j6wss,Uid:cc511929-66e0-46d7-a11d-745edf05836f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.806159 containerd[1447]: time="2025-07-07T06:06:01.806021076Z" level=error msg="encountered an error cleaning up failed sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.806223 kubelet[2474]: E0707 06:06:01.805883 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.806223 kubelet[2474]: E0707 06:06:01.805934 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-656c4558cd-j6wss" Jul 7 06:06:01.806223 kubelet[2474]: E0707 06:06:01.805951 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-656c4558cd-j6wss" Jul 7 06:06:01.806223 kubelet[2474]: E0707 06:06:01.805883 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.806440 kubelet[2474]: E0707 06:06:01.805984 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-656c4558cd-j6wss_calico-system(cc511929-66e0-46d7-a11d-745edf05836f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-656c4558cd-j6wss_calico-system(cc511929-66e0-46d7-a11d-745edf05836f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-656c4558cd-j6wss" podUID="cc511929-66e0-46d7-a11d-745edf05836f" Jul 7 06:06:01.806440 kubelet[2474]: E0707 06:06:01.806013 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6bhqv" Jul 7 06:06:01.806440 kubelet[2474]: E0707 06:06:01.806032 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-6bhqv" Jul 7 06:06:01.806533 containerd[1447]: time="2025-07-07T06:06:01.806072960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gfqc,Uid:cac05d19-02ca-4bd4-9a83-2d4df21aa5b9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.806561 kubelet[2474]: E0707 06:06:01.806069 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-6bhqv_calico-system(66fd4b0e-a42b-41a5-a8e7-b55cecdc3007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-6bhqv_calico-system(66fd4b0e-a42b-41a5-a8e7-b55cecdc3007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6bhqv" podUID="66fd4b0e-a42b-41a5-a8e7-b55cecdc3007" Jul 7 06:06:01.807084 kubelet[2474]: E0707 06:06:01.806757 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.807084 kubelet[2474]: E0707 06:06:01.806833 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4gfqc" Jul 7 06:06:01.807084 kubelet[2474]: E0707 06:06:01.806938 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4gfqc" Jul 7 06:06:01.807299 kubelet[2474]: E0707 06:06:01.807001 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4gfqc_kube-system(cac05d19-02ca-4bd4-9a83-2d4df21aa5b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4gfqc_kube-system(cac05d19-02ca-4bd4-9a83-2d4df21aa5b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4gfqc" podUID="cac05d19-02ca-4bd4-9a83-2d4df21aa5b9" Jul 7 06:06:01.807417 containerd[1447]: time="2025-07-07T06:06:01.806900069Z" level=error msg="encountered an error cleaning up failed sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.807417 containerd[1447]: time="2025-07-07T06:06:01.807208134Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd865784c-rm28g,Uid:18c7c67e-01e0-40ea-8d99-ba460eb1fde4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.807739 containerd[1447]: time="2025-07-07T06:06:01.807712736Z" level=error msg="Failed to destroy network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.808039 kubelet[2474]: E0707 06:06:01.807994 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.808039 kubelet[2474]: E0707 06:06:01.808029 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" Jul 7 06:06:01.808752 kubelet[2474]: E0707 06:06:01.808045 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" Jul 7 06:06:01.808752 kubelet[2474]: E0707 06:06:01.808074 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fd865784c-rm28g_calico-system(18c7c67e-01e0-40ea-8d99-ba460eb1fde4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fd865784c-rm28g_calico-system(18c7c67e-01e0-40ea-8d99-ba460eb1fde4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" podUID="18c7c67e-01e0-40ea-8d99-ba460eb1fde4" Jul 7 06:06:01.808891 containerd[1447]: time="2025-07-07T06:06:01.808666536Z" level=error msg="encountered an error cleaning up failed sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.808983 containerd[1447]: time="2025-07-07T06:06:01.808960720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lczkp,Uid:8ef96388-886e-494c-b484-12fab2731020,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.809393 kubelet[2474]: E0707 06:06:01.809142 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.809393 kubelet[2474]: E0707 06:06:01.809407 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lczkp" Jul 7 06:06:01.809393 kubelet[2474]: E0707 06:06:01.809427 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lczkp" Jul 7 06:06:01.809655 kubelet[2474]: E0707 06:06:01.809461 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lczkp_kube-system(8ef96388-886e-494c-b484-12fab2731020)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lczkp_kube-system(8ef96388-886e-494c-b484-12fab2731020)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lczkp" podUID="8ef96388-886e-494c-b484-12fab2731020" Jul 7 06:06:01.809803 containerd[1447]: time="2025-07-07T06:06:01.809498085Z" level=error msg="Failed to destroy network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.809803 containerd[1447]: time="2025-07-07T06:06:01.809759747Z" level=error msg="encountered an error cleaning up failed sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.809803 containerd[1447]: time="2025-07-07T06:06:01.809794349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jghp4,Uid:632e2793-d8ae-43c1-a1dd-7d580aa97009,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.810613 kubelet[2474]: E0707 06:06:01.809911 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.810613 kubelet[2474]: E0707 06:06:01.809947 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jghp4" Jul 7 06:06:01.810613 kubelet[2474]: E0707 06:06:01.809972 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jghp4" Jul 7 06:06:01.810738 kubelet[2474]: E0707 06:06:01.810008 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jghp4_calico-system(632e2793-d8ae-43c1-a1dd-7d580aa97009)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jghp4_calico-system(632e2793-d8ae-43c1-a1dd-7d580aa97009)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jghp4" podUID="632e2793-d8ae-43c1-a1dd-7d580aa97009" Jul 7 06:06:01.812617 containerd[1447]: time="2025-07-07T06:06:01.812588142Z" level=error msg="Failed to destroy network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.813084 containerd[1447]: time="2025-07-07T06:06:01.813032179Z" level=error msg="encountered an error cleaning up failed sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.813195 containerd[1447]: time="2025-07-07T06:06:01.813174711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-xdcfc,Uid:532a5639-1429-4a85-8fbb-b79c8b04dfd3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.813662 kubelet[2474]: E0707 06:06:01.813542 2474 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:01.813662 kubelet[2474]: E0707 06:06:01.813575 2474 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" Jul 7 06:06:01.813662 kubelet[2474]: E0707 06:06:01.813589 2474 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" Jul 7 06:06:01.813772 kubelet[2474]: E0707 06:06:01.813615 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56bdbd79df-xdcfc_calico-apiserver(532a5639-1429-4a85-8fbb-b79c8b04dfd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56bdbd79df-xdcfc_calico-apiserver(532a5639-1429-4a85-8fbb-b79c8b04dfd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" podUID="532a5639-1429-4a85-8fbb-b79c8b04dfd3" Jul 7 06:06:02.563060 kubelet[2474]: I0707 06:06:02.563011 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:02.564646 kubelet[2474]: I0707 06:06:02.564446 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:02.565500 containerd[1447]: time="2025-07-07T06:06:02.564929142Z" level=info msg="StopPodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\"" Jul 7 06:06:02.565500 containerd[1447]: time="2025-07-07T06:06:02.565092475Z" level=info msg="Ensure that sandbox c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886 in task-service has been cleanup successfully" Jul 7 06:06:02.570235 kubelet[2474]: I0707 06:06:02.568928 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:02.570346 containerd[1447]: time="2025-07-07T06:06:02.568988427Z" level=info msg="StopPodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\"" Jul 7 06:06:02.570346 containerd[1447]: time="2025-07-07T06:06:02.569769689Z" level=info msg="Ensure that sandbox 5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40 in task-service has been cleanup successfully" Jul 7 06:06:02.570346 containerd[1447]: time="2025-07-07T06:06:02.569306092Z" level=info msg="StopPodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\"" Jul 7 06:06:02.570428 containerd[1447]: time="2025-07-07T06:06:02.570346695Z" level=info msg="Ensure that sandbox f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc in task-service has been cleanup successfully" Jul 7 06:06:02.571174 kubelet[2474]: I0707 06:06:02.571138 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:02.571800 containerd[1447]: time="2025-07-07T06:06:02.571752408Z" level=info msg="StopPodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\"" Jul 7 06:06:02.572061 containerd[1447]: time="2025-07-07T06:06:02.572028910Z" level=info msg="Ensure that sandbox c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0 in task-service has been cleanup successfully" Jul 7 
06:06:02.576830 kubelet[2474]: I0707 06:06:02.576469 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:02.578701 containerd[1447]: time="2025-07-07T06:06:02.578660520Z" level=info msg="StopPodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\"" Jul 7 06:06:02.578870 kubelet[2474]: I0707 06:06:02.578830 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:02.579095 containerd[1447]: time="2025-07-07T06:06:02.579068753Z" level=info msg="Ensure that sandbox d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb in task-service has been cleanup successfully" Jul 7 06:06:02.580250 containerd[1447]: time="2025-07-07T06:06:02.580216924Z" level=info msg="StopPodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\"" Jul 7 06:06:02.580439 containerd[1447]: time="2025-07-07T06:06:02.580414220Z" level=info msg="Ensure that sandbox 3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4 in task-service has been cleanup successfully" Jul 7 06:06:02.586087 kubelet[2474]: I0707 06:06:02.583895 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:02.586171 containerd[1447]: time="2025-07-07T06:06:02.584390938Z" level=info msg="StopPodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\"" Jul 7 06:06:02.586171 containerd[1447]: time="2025-07-07T06:06:02.584558711Z" level=info msg="Ensure that sandbox 71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163 in task-service has been cleanup successfully" Jul 7 06:06:02.590896 kubelet[2474]: I0707 06:06:02.590858 2474 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:02.593186 containerd[1447]: time="2025-07-07T06:06:02.593147838Z" level=info msg="StopPodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\"" Jul 7 06:06:02.593401 containerd[1447]: time="2025-07-07T06:06:02.593373936Z" level=info msg="Ensure that sandbox 431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37 in task-service has been cleanup successfully" Jul 7 06:06:02.637842 containerd[1447]: time="2025-07-07T06:06:02.637716081Z" level=error msg="StopPodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" failed" error="failed to destroy network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.638159 kubelet[2474]: E0707 06:06:02.638041 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:02.638226 kubelet[2474]: E0707 
06:06:02.638182 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40"} Jul 7 06:06:02.638257 kubelet[2474]: E0707 06:06:02.638241 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"18c7c67e-01e0-40ea-8d99-ba460eb1fde4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.638334 kubelet[2474]: E0707 06:06:02.638265 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"18c7c67e-01e0-40ea-8d99-ba460eb1fde4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" podUID="18c7c67e-01e0-40ea-8d99-ba460eb1fde4" Jul 7 06:06:02.654829 containerd[1447]: time="2025-07-07T06:06:02.654768524Z" level=error msg="StopPodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" failed" error="failed to destroy network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.655311 kubelet[2474]: E0707 06:06:02.655174 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:02.655311 kubelet[2474]: E0707 06:06:02.655227 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0"} Jul 7 06:06:02.655311 kubelet[2474]: E0707 06:06:02.655258 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"632e2793-d8ae-43c1-a1dd-7d580aa97009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.655311 kubelet[2474]: E0707 06:06:02.655279 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"632e2793-d8ae-43c1-a1dd-7d580aa97009\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jghp4" podUID="632e2793-d8ae-43c1-a1dd-7d580aa97009" Jul 7 06:06:02.655910 containerd[1447]: time="2025-07-07T06:06:02.655850451Z" level=error msg="StopPodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" failed" error="failed to destroy network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.656188 kubelet[2474]: E0707 06:06:02.656156 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:02.656254 kubelet[2474]: E0707 06:06:02.656196 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886"} Jul 7 06:06:02.656309 kubelet[2474]: E0707 06:06:02.656222 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"532a5639-1429-4a85-8fbb-b79c8b04dfd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.656365 kubelet[2474]: E0707 06:06:02.656309 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"532a5639-1429-4a85-8fbb-b79c8b04dfd3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" podUID="532a5639-1429-4a85-8fbb-b79c8b04dfd3" Jul 7 06:06:02.659117 containerd[1447]: time="2025-07-07T06:06:02.659069628Z" level=error msg="StopPodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" failed" error="failed to destroy network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.659368 kubelet[2474]: E0707 06:06:02.659269 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:02.659497 kubelet[2474]: E0707 06:06:02.659313 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc"} Jul 7 06:06:02.659497 kubelet[2474]: E0707 06:06:02.659453 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.659497 kubelet[2474]: E0707 06:06:02.659471 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-6bhqv" podUID="66fd4b0e-a42b-41a5-a8e7-b55cecdc3007" Jul 7 06:06:02.660236 containerd[1447]: time="2025-07-07T06:06:02.660084149Z" level=error msg="StopPodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" failed" error="failed to destroy network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.660443 kubelet[2474]: E0707 06:06:02.660413 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:02.660486 kubelet[2474]: E0707 06:06:02.660448 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163"} Jul 7 06:06:02.660486 kubelet[2474]: E0707 06:06:02.660473 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc511929-66e0-46d7-a11d-745edf05836f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.660574 kubelet[2474]: E0707 06:06:02.660490 2474 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc511929-66e0-46d7-a11d-745edf05836f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-656c4558cd-j6wss" podUID="cc511929-66e0-46d7-a11d-745edf05836f" Jul 7 06:06:02.662724 containerd[1447]: time="2025-07-07T06:06:02.662678996Z" level=error msg="StopPodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" failed" error="failed to destroy network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.663026 kubelet[2474]: E0707 06:06:02.662908 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:02.663026 kubelet[2474]: E0707 06:06:02.662956 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb"} Jul 7 06:06:02.663026 kubelet[2474]: E0707 06:06:02.662979 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ef96388-886e-494c-b484-12fab2731020\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.663026 kubelet[2474]: E0707 06:06:02.662998 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ef96388-886e-494c-b484-12fab2731020\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lczkp" podUID="8ef96388-886e-494c-b484-12fab2731020" Jul 7 06:06:02.663528 containerd[1447]: time="2025-07-07T06:06:02.663498422Z" level=error msg="StopPodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" failed" error="failed to destroy network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.663793 kubelet[2474]: E0707 
06:06:02.663746 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:02.663793 kubelet[2474]: E0707 06:06:02.663790 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37"} Jul 7 06:06:02.663884 kubelet[2474]: E0707 06:06:02.663815 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"528c9db9-c113-4e11-bd08-112e227a85e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.663884 kubelet[2474]: E0707 06:06:02.663832 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"528c9db9-c113-4e11-bd08-112e227a85e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" podUID="528c9db9-c113-4e11-bd08-112e227a85e1" Jul 7 06:06:02.664072 containerd[1447]: time="2025-07-07T06:06:02.664028904Z" level=error msg="StopPodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" failed" error="failed to destroy network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:06:02.664382 kubelet[2474]: E0707 06:06:02.664246 2474 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:02.664382 kubelet[2474]: E0707 06:06:02.664286 2474 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4"} Jul 7 06:06:02.664382 kubelet[2474]: E0707 06:06:02.664310 2474 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 06:06:02.664382 kubelet[2474]: E0707 06:06:02.664355 2474 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4gfqc" podUID="cac05d19-02ca-4bd4-9a83-2d4df21aa5b9" Jul 7 06:06:04.729448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514944824.mount: Deactivated successfully. Jul 7 06:06:04.944088 containerd[1447]: time="2025-07-07T06:06:04.944023570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:04.946065 containerd[1447]: time="2025-07-07T06:06:04.945872147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 7 06:06:04.951705 containerd[1447]: time="2025-07-07T06:06:04.951648014Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:04.961024 containerd[1447]: time="2025-07-07T06:06:04.959705770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:04.961024 containerd[1447]: time="2025-07-07T06:06:04.960427944Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.399499384s" Jul 7 06:06:04.961024 containerd[1447]: time="2025-07-07T06:06:04.960456306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 7 06:06:04.969737 containerd[1447]: time="2025-07-07T06:06:04.969691949Z" level=info msg="CreateContainer within sandbox \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:06:04.985427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977029582.mount: Deactivated successfully. 
Jul 7 06:06:04.987701 containerd[1447]: time="2025-07-07T06:06:04.987635157Z" level=info msg="CreateContainer within sandbox \"d73c6639a6a5cabdd9da12c3d26b4cfe1c73ca37eacca884f640ffcf6da86549\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"540af7acd325165045918a259e82ac73bab0b92d327bd2e71c590b5c9c1ee7cd\"" Jul 7 06:06:04.988389 containerd[1447]: time="2025-07-07T06:06:04.988253963Z" level=info msg="StartContainer for \"540af7acd325165045918a259e82ac73bab0b92d327bd2e71c590b5c9c1ee7cd\"" Jul 7 06:06:05.040494 systemd[1]: Started cri-containerd-540af7acd325165045918a259e82ac73bab0b92d327bd2e71c590b5c9c1ee7cd.scope - libcontainer container 540af7acd325165045918a259e82ac73bab0b92d327bd2e71c590b5c9c1ee7cd. Jul 7 06:06:05.076749 containerd[1447]: time="2025-07-07T06:06:05.076699703Z" level=info msg="StartContainer for \"540af7acd325165045918a259e82ac73bab0b92d327bd2e71c590b5c9c1ee7cd\" returns successfully" Jul 7 06:06:05.358336 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:06:05.358486 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:06:05.474666 containerd[1447]: time="2025-07-07T06:06:05.474593831Z" level=info msg="StopPodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\"" Jul 7 06:06:05.663302 kubelet[2474]: I0707 06:06:05.662794 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q6wfx" podStartSLOduration=2.035566545 podStartE2EDuration="11.662762647s" podCreationTimestamp="2025-07-07 06:05:54 +0000 UTC" firstStartedPulling="2025-07-07 06:05:55.334886364 +0000 UTC m=+19.949383620" lastFinishedPulling="2025-07-07 06:06:04.962082506 +0000 UTC m=+29.576579722" observedRunningTime="2025-07-07 06:06:05.662214488 +0000 UTC m=+30.276711744" watchObservedRunningTime="2025-07-07 06:06:05.662762647 +0000 UTC m=+30.277259903" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.594 [INFO][3764] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.597 [INFO][3764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" iface="eth0" netns="/var/run/netns/cni-eb95f2bb-34e3-2061-84f9-514bb21a53cd" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.597 [INFO][3764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" iface="eth0" netns="/var/run/netns/cni-eb95f2bb-34e3-2061-84f9-514bb21a53cd" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.608 [INFO][3764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" iface="eth0" netns="/var/run/netns/cni-eb95f2bb-34e3-2061-84f9-514bb21a53cd" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.608 [INFO][3764] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.608 [INFO][3764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.712 [INFO][3781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.712 [INFO][3781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.714 [INFO][3781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.728 [WARNING][3781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.728 [INFO][3781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.730 [INFO][3781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:05.742467 containerd[1447]: 2025-07-07 06:06:05.740 [INFO][3764] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:05.743505 containerd[1447]: time="2025-07-07T06:06:05.743186981Z" level=info msg="TearDown network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" successfully" Jul 7 06:06:05.743505 containerd[1447]: time="2025-07-07T06:06:05.743214903Z" level=info msg="StopPodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" returns successfully" Jul 7 06:06:05.747165 systemd[1]: run-netns-cni\x2deb95f2bb\x2d34e3\x2d2061\x2d84f9\x2d514bb21a53cd.mount: Deactivated successfully. 
Jul 7 06:06:05.807837 kubelet[2474]: I0707 06:06:05.807659 2474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j94kb\" (UniqueName: \"kubernetes.io/projected/cc511929-66e0-46d7-a11d-745edf05836f-kube-api-access-j94kb\") pod \"cc511929-66e0-46d7-a11d-745edf05836f\" (UID: \"cc511929-66e0-46d7-a11d-745edf05836f\") " Jul 7 06:06:05.807837 kubelet[2474]: I0707 06:06:05.807696 2474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc511929-66e0-46d7-a11d-745edf05836f-whisker-backend-key-pair\") pod \"cc511929-66e0-46d7-a11d-745edf05836f\" (UID: \"cc511929-66e0-46d7-a11d-745edf05836f\") " Jul 7 06:06:05.807837 kubelet[2474]: I0707 06:06:05.807724 2474 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc511929-66e0-46d7-a11d-745edf05836f-whisker-ca-bundle\") pod \"cc511929-66e0-46d7-a11d-745edf05836f\" (UID: \"cc511929-66e0-46d7-a11d-745edf05836f\") " Jul 7 06:06:05.811277 kubelet[2474]: I0707 06:06:05.811195 2474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc511929-66e0-46d7-a11d-745edf05836f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cc511929-66e0-46d7-a11d-745edf05836f" (UID: "cc511929-66e0-46d7-a11d-745edf05836f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 06:06:05.812500 kubelet[2474]: I0707 06:06:05.812457 2474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc511929-66e0-46d7-a11d-745edf05836f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cc511929-66e0-46d7-a11d-745edf05836f" (UID: "cc511929-66e0-46d7-a11d-745edf05836f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 06:06:05.813487 kubelet[2474]: I0707 06:06:05.813437 2474 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc511929-66e0-46d7-a11d-745edf05836f-kube-api-access-j94kb" (OuterVolumeSpecName: "kube-api-access-j94kb") pod "cc511929-66e0-46d7-a11d-745edf05836f" (UID: "cc511929-66e0-46d7-a11d-745edf05836f"). InnerVolumeSpecName "kube-api-access-j94kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 06:06:05.813977 systemd[1]: var-lib-kubelet-pods-cc511929\x2d66e0\x2d46d7\x2da11d\x2d745edf05836f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj94kb.mount: Deactivated successfully. Jul 7 06:06:05.814084 systemd[1]: var-lib-kubelet-pods-cc511929\x2d66e0\x2d46d7\x2da11d\x2d745edf05836f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
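
The odd-looking mount unit names in those systemd lines are just systemd's path escaping applied to the kubelet volume directories: path separators become "-", and bytes outside [a-zA-Z0-9_.] (including the literal "-" and "~" in the path) become \xNN, which is why kubernetes.io~projected surfaces as kubernetes.io\x7eprojected. A simplified ASCII-only re-implementation (real systemd-escape handles a few more edge cases, and the .mount suffix corresponds to its --suffix=mount option):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path --suffix=mount`:
// strip the outer slashes, turn remaining separators into '-', and
// hex-escape any byte outside [a-zA-Z0-9_.] (a leading '.' as well).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	// Reproduces the secret-volume unit name deactivated above.
	fmt.Println(escapePath("/var/lib/kubelet/pods/cc511929-66e0-46d7-a11d-745edf05836f/volumes/kubernetes.io~secret/whisker-backend-key-pair"))
}
```
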
Jul 7 06:06:05.908947 kubelet[2474]: I0707 06:06:05.908826 2474 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j94kb\" (UniqueName: \"kubernetes.io/projected/cc511929-66e0-46d7-a11d-745edf05836f-kube-api-access-j94kb\") on node \"localhost\" DevicePath \"\"" Jul 7 06:06:05.908947 kubelet[2474]: I0707 06:06:05.908866 2474 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc511929-66e0-46d7-a11d-745edf05836f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:06:05.908947 kubelet[2474]: I0707 06:06:05.908876 2474 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc511929-66e0-46d7-a11d-745edf05836f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:06:06.644991 systemd[1]: Removed slice kubepods-besteffort-podcc511929_66e0_46d7_a11d_745edf05836f.slice - libcontainer container kubepods-besteffort-podcc511929_66e0_46d7_a11d_745edf05836f.slice. Jul 7 06:06:06.703156 systemd[1]: Created slice kubepods-besteffort-pod4be44f24_d777_46ec_9b48_5bac50250a5c.slice - libcontainer container kubepods-besteffort-pod4be44f24_d777_46ec_9b48_5bac50250a5c.slice. Jul 7 06:06:06.715449 kubelet[2474]: I0707 06:06:06.715405 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4be44f24-d777-46ec-9b48-5bac50250a5c-whisker-ca-bundle\") pod \"whisker-7bbc5cd494-mv2rf\" (UID: \"4be44f24-d777-46ec-9b48-5bac50250a5c\") " pod="calico-system/whisker-7bbc5cd494-mv2rf" Jul 7 06:06:06.715979 kubelet[2474]: I0707 06:06:06.715879 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4be44f24-d777-46ec-9b48-5bac50250a5c-whisker-backend-key-pair\") pod \"whisker-7bbc5cd494-mv2rf\" (UID: \"4be44f24-d777-46ec-9b48-5bac50250a5c\") " pod="calico-system/whisker-7bbc5cd494-mv2rf" Jul 7 06:06:06.715979 kubelet[2474]: I0707 06:06:06.715921 2474 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8gkn\" (UniqueName: \"kubernetes.io/projected/4be44f24-d777-46ec-9b48-5bac50250a5c-kube-api-access-x8gkn\") pod \"whisker-7bbc5cd494-mv2rf\" (UID: \"4be44f24-d777-46ec-9b48-5bac50250a5c\") " pod="calico-system/whisker-7bbc5cd494-mv2rf" Jul 7 06:06:07.010626 containerd[1447]: time="2025-07-07T06:06:07.010580485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbc5cd494-mv2rf,Uid:4be44f24-d777-46ec-9b48-5bac50250a5c,Namespace:calico-system,Attempt:0,}" Jul 7 06:06:07.232521 systemd-networkd[1379]: calib25520b4ab0: Link UP Jul 7 06:06:07.233087 systemd-networkd[1379]: calib25520b4ab0: Gained carrier Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.148 [INFO][3950] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.162 [INFO][3950] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0 whisker-7bbc5cd494- calico-system 4be44f24-d777-46ec-9b48-5bac50250a5c 876 0 2025-07-07 06:06:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7bbc5cd494 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7bbc5cd494-mv2rf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib25520b4ab0 [] [] }} ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.162 [INFO][3950] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.184 [INFO][3965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" HandleID="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Workload="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.185 [INFO][3965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" HandleID="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Workload="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003235e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7bbc5cd494-mv2rf", "timestamp":"2025-07-07 06:06:07.184908219 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.185 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.185 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.185 [INFO][3965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.195 [INFO][3965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.203 [INFO][3965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.207 [INFO][3965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.208 [INFO][3965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.210 [INFO][3965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.210 [INFO][3965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.212 [INFO][3965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862 Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.215 [INFO][3965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.221 [INFO][3965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.221 [INFO][3965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" host="localhost" Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.221 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:06:07.253557 containerd[1447]: 2025-07-07 06:06:07.221 [INFO][3965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" HandleID="k8s-pod-network.503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Workload="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.254200 containerd[1447]: 2025-07-07 06:06:07.223 [INFO][3950] cni-plugin/k8s.go 418: Populated endpoint ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0", GenerateName:"whisker-7bbc5cd494-", Namespace:"calico-system", SelfLink:"", UID:"4be44f24-d777-46ec-9b48-5bac50250a5c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bbc5cd494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7bbc5cd494-mv2rf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib25520b4ab0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:07.254200 containerd[1447]: 2025-07-07 06:06:07.223 [INFO][3950] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.254200 containerd[1447]: 2025-07-07 06:06:07.225 [INFO][3950] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib25520b4ab0 ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.254200 containerd[1447]: 2025-07-07 06:06:07.235 [INFO][3950] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.254200 containerd[1447]: 2025-07-07 06:06:07.235 [INFO][3950] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0", GenerateName:"whisker-7bbc5cd494-", Namespace:"calico-system", SelfLink:"", UID:"4be44f24-d777-46ec-9b48-5bac50250a5c", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7bbc5cd494", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862", Pod:"whisker-7bbc5cd494-mv2rf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib25520b4ab0", MAC:"42:ef:77:28:cc:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:07.254200 containerd[1447]: 2025-07-07 06:06:07.250 [INFO][3950] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862" Namespace="calico-system" Pod="whisker-7bbc5cd494-mv2rf" WorkloadEndpoint="localhost-k8s-whisker--7bbc5cd494--mv2rf-eth0" Jul 7 06:06:07.278998 containerd[1447]: time="2025-07-07T06:06:07.278853376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:07.278998 containerd[1447]: time="2025-07-07T06:06:07.278906700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:07.279591 containerd[1447]: time="2025-07-07T06:06:07.279266524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:07.280035 containerd[1447]: time="2025-07-07T06:06:07.279809360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:07.306497 systemd[1]: Started cri-containerd-503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862.scope - libcontainer container 503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862. 
Jul 7 06:06:07.316410 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:07.334303 containerd[1447]: time="2025-07-07T06:06:07.334260615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bbc5cd494-mv2rf,Uid:4be44f24-d777-46ec-9b48-5bac50250a5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862\"" Jul 7 06:06:07.335920 containerd[1447]: time="2025-07-07T06:06:07.335743834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 06:06:07.467188 kubelet[2474]: I0707 06:06:07.467142 2474 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc511929-66e0-46d7-a11d-745edf05836f" path="/var/lib/kubelet/pods/cc511929-66e0-46d7-a11d-745edf05836f/volumes" Jul 7 06:06:07.823538 systemd[1]: run-containerd-runc-k8s.io-503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862-runc.copLPf.mount: Deactivated successfully. Jul 7 06:06:08.352091 containerd[1447]: time="2025-07-07T06:06:08.352034930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:08.352926 containerd[1447]: time="2025-07-07T06:06:08.352532242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 7 06:06:08.353363 containerd[1447]: time="2025-07-07T06:06:08.353334973Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:08.356412 containerd[1447]: time="2025-07-07T06:06:08.356368848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:08.357240 containerd[1447]: time="2025-07-07T06:06:08.357204902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.021425226s" Jul 7 06:06:08.357368 containerd[1447]: time="2025-07-07T06:06:08.357348071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 7 06:06:08.360516 containerd[1447]: time="2025-07-07T06:06:08.360481752Z" level=info msg="CreateContainer within sandbox \"503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 06:06:08.372301 containerd[1447]: time="2025-07-07T06:06:08.372255347Z" level=info msg="CreateContainer within sandbox \"503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2561f02ce5a2480b4f91a0867fd36f4448f10db35b30575dce8e4816619f6812\"" Jul 7 06:06:08.372854 containerd[1447]: time="2025-07-07T06:06:08.372721137Z" level=info msg="StartContainer for \"2561f02ce5a2480b4f91a0867fd36f4448f10db35b30575dce8e4816619f6812\"" Jul 7 06:06:08.401498 systemd[1]: Started 
cri-containerd-2561f02ce5a2480b4f91a0867fd36f4448f10db35b30575dce8e4816619f6812.scope - libcontainer container 2561f02ce5a2480b4f91a0867fd36f4448f10db35b30575dce8e4816619f6812. Jul 7 06:06:08.430845 containerd[1447]: time="2025-07-07T06:06:08.430804825Z" level=info msg="StartContainer for \"2561f02ce5a2480b4f91a0867fd36f4448f10db35b30575dce8e4816619f6812\" returns successfully" Jul 7 06:06:08.434684 containerd[1447]: time="2025-07-07T06:06:08.434654752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 06:06:09.249476 systemd-networkd[1379]: calib25520b4ab0: Gained IPv6LL Jul 7 06:06:09.859639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3095903353.mount: Deactivated successfully. Jul 7 06:06:09.875577 containerd[1447]: time="2025-07-07T06:06:09.875540673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:09.877055 containerd[1447]: time="2025-07-07T06:06:09.876867035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 7 06:06:09.877958 containerd[1447]: time="2025-07-07T06:06:09.877700407Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:09.880788 containerd[1447]: time="2025-07-07T06:06:09.880541344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:09.881296 containerd[1447]: time="2025-07-07T06:06:09.881269749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.446579235s" Jul 7 06:06:09.881354 containerd[1447]: time="2025-07-07T06:06:09.881301631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 7 06:06:09.883963 containerd[1447]: time="2025-07-07T06:06:09.883934594Z" level=info msg="CreateContainer within sandbox \"503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 06:06:09.897601 containerd[1447]: time="2025-07-07T06:06:09.897485516Z" level=info msg="CreateContainer within sandbox \"503aaa5aae9544aaddfee637b70466bf05023e73e4bbad81b83ff76a6fb00862\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4cf7f62acde5b17dd9dd8849c9657c0e2b2f89890bf62c29a4e84d296ea3db4c\"" Jul 7 06:06:09.898138 containerd[1447]: time="2025-07-07T06:06:09.898095233Z" level=info msg="StartContainer for \"4cf7f62acde5b17dd9dd8849c9657c0e2b2f89890bf62c29a4e84d296ea3db4c\"" Jul 7 06:06:09.941517 systemd[1]: Started cri-containerd-4cf7f62acde5b17dd9dd8849c9657c0e2b2f89890bf62c29a4e84d296ea3db4c.scope - libcontainer container 4cf7f62acde5b17dd9dd8849c9657c0e2b2f89890bf62c29a4e84d296ea3db4c. 
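The pull above logs both the bytes transferred ("bytes read=4605614") and the wall-clock duration ("in 1.021425226s"), which is enough for a back-of-the-envelope throughput figure. This is illustrative arithmetic over the logged numbers only, not an API call:

// pull_throughput.go: effective rate for the whisker image pull logged
// above, computed from the figures containerd printed.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 4605614.0                       // "active requests=0, bytes read=4605614"
	dur, _ := time.ParseDuration("1.021425226s") // "... in 1.021425226s"

	mibPerSec := bytesRead / dur.Seconds() / (1 << 20)
	fmt.Printf("effective pull rate: %.2f MiB/s\n", mibPerSec) // ≈ 4.30 MiB/s
}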
Jul 7 06:06:09.972629 containerd[1447]: time="2025-07-07T06:06:09.972534775Z" level=info msg="StartContainer for \"4cf7f62acde5b17dd9dd8849c9657c0e2b2f89890bf62c29a4e84d296ea3db4c\" returns successfully" Jul 7 06:06:13.467042 containerd[1447]: time="2025-07-07T06:06:13.466519993Z" level=info msg="StopPodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\"" Jul 7 06:06:13.467042 containerd[1447]: time="2025-07-07T06:06:13.466767047Z" level=info msg="StopPodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\"" Jul 7 06:06:13.467042 containerd[1447]: time="2025-07-07T06:06:13.466799569Z" level=info msg="StopPodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\"" Jul 7 06:06:13.467042 containerd[1447]: time="2025-07-07T06:06:13.467034942Z" level=info msg="StopPodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\"" Jul 7 06:06:13.521799 kubelet[2474]: I0707 06:06:13.521076 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7bbc5cd494-mv2rf" podStartSLOduration=4.974535054 podStartE2EDuration="7.521059792s" podCreationTimestamp="2025-07-07 06:06:06 +0000 UTC" firstStartedPulling="2025-07-07 06:06:07.335471536 +0000 UTC m=+31.949968752" lastFinishedPulling="2025-07-07 06:06:09.881996234 +0000 UTC m=+34.496493490" observedRunningTime="2025-07-07 06:06:10.659873723 +0000 UTC m=+35.274370979" watchObservedRunningTime="2025-07-07 06:06:13.521059792 +0000 UTC m=+38.135557048" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.530 [INFO][4326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.531 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" iface="eth0" netns="/var/run/netns/cni-75112029-651b-0d39-59d6-ff8ef7691c48" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.531 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" iface="eth0" netns="/var/run/netns/cni-75112029-651b-0d39-59d6-ff8ef7691c48" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.534 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" iface="eth0" netns="/var/run/netns/cni-75112029-651b-0d39-59d6-ff8ef7691c48" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.534 [INFO][4326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.534 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.560 [INFO][4373] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.560 [INFO][4373] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.560 [INFO][4373] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.568 [WARNING][4373] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.568 [INFO][4373] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.570 [INFO][4373] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:13.575408 containerd[1447]: 2025-07-07 06:06:13.573 [INFO][4326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:13.575870 containerd[1447]: time="2025-07-07T06:06:13.575532666Z" level=info msg="TearDown network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" successfully" Jul 7 06:06:13.575870 containerd[1447]: time="2025-07-07T06:06:13.575559548Z" level=info msg="StopPodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" returns successfully" Jul 7 06:06:13.576185 kubelet[2474]: E0707 06:06:13.576153 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:13.577880 containerd[1447]: time="2025-07-07T06:06:13.577755308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gfqc,Uid:cac05d19-02ca-4bd4-9a83-2d4df21aa5b9,Namespace:kube-system,Attempt:1,}" Jul 7 06:06:13.579697 systemd[1]: run-netns-cni\x2d75112029\x2d651b\x2d0d39\x2d59d6\x2dff8ef7691c48.mount: Deactivated successfully. 
Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.520 [INFO][4321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.521 [INFO][4321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" iface="eth0" netns="/var/run/netns/cni-b3ccefd7-4d64-4bbb-eec9-08c8ae9b2cce" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.521 [INFO][4321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" iface="eth0" netns="/var/run/netns/cni-b3ccefd7-4d64-4bbb-eec9-08c8ae9b2cce" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.521 [INFO][4321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" iface="eth0" netns="/var/run/netns/cni-b3ccefd7-4d64-4bbb-eec9-08c8ae9b2cce" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.521 [INFO][4321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.521 [INFO][4321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.567 [INFO][4358] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.567 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.570 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.581 [WARNING][4358] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.581 [INFO][4358] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.582 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:13.588407 containerd[1447]: 2025-07-07 06:06:13.584 [INFO][4321] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:13.588407 containerd[1447]: time="2025-07-07T06:06:13.587818102Z" level=info msg="TearDown network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" successfully" Jul 7 06:06:13.588407 containerd[1447]: time="2025-07-07T06:06:13.587839663Z" level=info msg="StopPodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" returns successfully" Jul 7 06:06:13.589049 containerd[1447]: time="2025-07-07T06:06:13.589001527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6bhqv,Uid:66fd4b0e-a42b-41a5-a8e7-b55cecdc3007,Namespace:calico-system,Attempt:1,}" Jul 7 06:06:13.589874 systemd[1]: run-netns-cni\x2db3ccefd7\x2d4d64\x2d4bbb\x2deec9\x2d08c8ae9b2cce.mount: Deactivated successfully. Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.530 [INFO][4331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.531 [INFO][4331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" iface="eth0" netns="/var/run/netns/cni-187abb75-be31-3f83-f322-9d5381d7d87d" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.532 [INFO][4331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" iface="eth0" netns="/var/run/netns/cni-187abb75-be31-3f83-f322-9d5381d7d87d" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.532 [INFO][4331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" iface="eth0" netns="/var/run/netns/cni-187abb75-be31-3f83-f322-9d5381d7d87d" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.532 [INFO][4331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.532 [INFO][4331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.570 [INFO][4367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.571 [INFO][4367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.582 [INFO][4367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.597 [WARNING][4367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.597 [INFO][4367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.600 [INFO][4367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:13.610311 containerd[1447]: 2025-07-07 06:06:13.608 [INFO][4331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:13.610911 containerd[1447]: time="2025-07-07T06:06:13.610881729Z" level=info msg="TearDown network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" successfully" Jul 7 06:06:13.611002 containerd[1447]: time="2025-07-07T06:06:13.610988495Z" level=info msg="StopPodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" returns successfully" Jul 7 06:06:13.611461 kubelet[2474]: E0707 06:06:13.611427 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:13.612377 containerd[1447]: time="2025-07-07T06:06:13.612209842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lczkp,Uid:8ef96388-886e-494c-b484-12fab2731020,Namespace:kube-system,Attempt:1,}" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.555 [INFO][4343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.555 [INFO][4343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" iface="eth0" netns="/var/run/netns/cni-1d98e045-639f-f687-8529-40917b013c09" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.555 [INFO][4343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" iface="eth0" netns="/var/run/netns/cni-1d98e045-639f-f687-8529-40917b013c09" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.556 [INFO][4343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" iface="eth0" netns="/var/run/netns/cni-1d98e045-639f-f687-8529-40917b013c09" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.556 [INFO][4343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.556 [INFO][4343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.608 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.608 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.608 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.620 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.620 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.624 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:13.634480 containerd[1447]: 2025-07-07 06:06:13.631 [INFO][4343] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:13.635412 containerd[1447]: time="2025-07-07T06:06:13.634610954Z" level=info msg="TearDown network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" successfully" Jul 7 06:06:13.635412 containerd[1447]: time="2025-07-07T06:06:13.634637795Z" level=info msg="StopPodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" returns successfully" Jul 7 06:06:13.635412 containerd[1447]: time="2025-07-07T06:06:13.635250389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-bkhbd,Uid:528c9db9-c113-4e11-bd08-112e227a85e1,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:06:13.733779 systemd-networkd[1379]: cali56917c25625: Link UP Jul 7 06:06:13.734611 systemd-networkd[1379]: cali56917c25625: Gained carrier Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.629 [INFO][4403] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.649 [INFO][4403] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0 goldmane-58fd7646b9- calico-system 66fd4b0e-a42b-41a5-a8e7-b55cecdc3007 915 0 2025-07-07 06:05:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-6bhqv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali56917c25625 [] [] }} ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.650 [INFO][4403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.684 [INFO][4446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" HandleID="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.684 [INFO][4446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" HandleID="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012f6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-6bhqv", "timestamp":"2025-07-07 06:06:13.684670346 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.684 [INFO][4446] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.684 [INFO][4446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.684 [INFO][4446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.702 [INFO][4446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.706 [INFO][4446] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.711 [INFO][4446] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.713 [INFO][4446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.715 [INFO][4446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.715 [INFO][4446] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.716 [INFO][4446] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67 Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.721 [INFO][4446] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.726 [INFO][4446] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.726 [INFO][4446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" host="localhost" Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.727 [INFO][4446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
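The allocation sequence above (and the assignment announced just below) follows a fixed skeleton: take the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.88.128/26, claim the lowest free ordinal, and write the block back. A sketch of that skeleton; the in-memory block is a hypothetical stand-in for Calico's datastore-backed IPAM block, and ordinal 0 (.128) is shown as in use only because the log's first assignment was .129 — the log doesn't say why .128 was skipped:

// ipam_block_sketch.go: serialize on the host-wide lock, then hand out the
// lowest free ordinal in the affine block.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr      netip.Prefix
	allocated map[int]string // ordinal -> handle ID
}

var hostIPAMLock sync.Mutex // "About to acquire host-wide IPAM lock."

func (b *block) assign(handle string) (netip.Addr, bool) {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock() // "Released host-wide IPAM lock."

	addr := b.cidr.Addr()
	size := 1 << (32 - b.cidr.Bits()) // 64 addresses in a /26
	for ord := 0; ord < size; ord++ {
		if _, taken := b.allocated[ord]; !taken {
			b.allocated[ord] = handle
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[int]string{
			0: "unused in this log",       // .128
			1: "whisker-7bbc5cd494-mv2rf", // .129, assigned earlier
		},
	}
	ip, _ := b.assign("k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67")
	fmt.Println("assigned:", ip) // 192.168.88.130, matching the log
}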
Jul 7 06:06:13.750425 containerd[1447]: 2025-07-07 06:06:13.727 [INFO][4446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" HandleID="k8s-pod-network.07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.750928 containerd[1447]: 2025-07-07 06:06:13.730 [INFO][4403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-6bhqv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56917c25625", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:13.750928 containerd[1447]: 2025-07-07 06:06:13.730 [INFO][4403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.750928 containerd[1447]: 2025-07-07 06:06:13.730 [INFO][4403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56917c25625 ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.750928 containerd[1447]: 2025-07-07 06:06:13.736 [INFO][4403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.750928 containerd[1447]: 2025-07-07 06:06:13.737 [INFO][4403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67", Pod:"goldmane-58fd7646b9-6bhqv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56917c25625", MAC:"3e:44:5d:20:6b:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:13.750928 containerd[1447]: 2025-07-07 06:06:13.748 [INFO][4403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67" Namespace="calico-system" Pod="goldmane-58fd7646b9-6bhqv" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:13.764165 containerd[1447]: time="2025-07-07T06:06:13.764042069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:13.764165 containerd[1447]: time="2025-07-07T06:06:13.764121274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:13.764165 containerd[1447]: time="2025-07-07T06:06:13.764139395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:13.764539 containerd[1447]: time="2025-07-07T06:06:13.764232080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:13.792509 systemd[1]: Started cri-containerd-07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67.scope - libcontainer container 07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67. 
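From here on, four sandbox setups (goldmane, two coredns pods, and a calico-apiserver pod) interleave freely in the journal, so the practical way to follow one flow is to filter on its 64-hex ContainerID. A minimal stdin filter, offered as a convenience sketch:

// trace_container.go: filter journal output for one container's entries, e.g.
//   journalctl -o short-precise | go run trace_container.go 07a2fd3ab1b6
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: trace_container <id-prefix>")
		os.Exit(2)
	}
	id := os.Args[1]
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // entries here are very long
	for sc.Scan() {
		if strings.Contains(sc.Text(), id) {
			fmt.Println(sc.Text())
		}
	}
}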
Jul 7 06:06:13.802138 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:13.824860 containerd[1447]: time="2025-07-07T06:06:13.824814010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-6bhqv,Uid:66fd4b0e-a42b-41a5-a8e7-b55cecdc3007,Namespace:calico-system,Attempt:1,} returns sandbox id \"07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67\"" Jul 7 06:06:13.826343 containerd[1447]: time="2025-07-07T06:06:13.826303212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:06:13.839448 systemd-networkd[1379]: calidac1869deb1: Link UP Jul 7 06:06:13.840125 systemd-networkd[1379]: calidac1869deb1: Gained carrier Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.645 [INFO][4417] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.658 [INFO][4417] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0 coredns-7c65d6cfc9- kube-system 8ef96388-886e-494c-b484-12fab2731020 916 0 2025-07-07 06:05:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-lczkp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidac1869deb1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.658 [INFO][4417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.687 [INFO][4453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" HandleID="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.687 [INFO][4453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" HandleID="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-lczkp", "timestamp":"2025-07-07 06:06:13.687485901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.687 [INFO][4453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.727 [INFO][4453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.727 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.799 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.807 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.811 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.814 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.818 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.818 [INFO][4453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.820 [INFO][4453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7 Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.824 [INFO][4453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.832 [INFO][4453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.832 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" host="localhost" Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.832 [INFO][4453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
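Every allocation in this log is tracked under a handle ID of the form "k8s-pod-network.<containerID>" — the same key the teardown path releases by. The prefix appears to be the CNI network's name as read off the log, not a documented format guarantee:

// handle_id.go: the key that ties an assignment to its later
// release-by-handle during teardown.
package main

import "fmt"

const networkName = "k8s-pod-network" // as seen in the HandleID fields above

func handleID(containerID string) string {
	return fmt.Sprintf("%s.%s", networkName, containerID)
}

func main() {
	fmt.Println(handleID("e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7"))
}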
Jul 7 06:06:13.852057 containerd[1447]: 2025-07-07 06:06:13.832 [INFO][4453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" HandleID="k8s-pod-network.e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.852592 containerd[1447]: 2025-07-07 06:06:13.834 [INFO][4417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ef96388-886e-494c-b484-12fab2731020", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-lczkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidac1869deb1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:13.852592 containerd[1447]: 2025-07-07 06:06:13.835 [INFO][4417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.852592 containerd[1447]: 2025-07-07 06:06:13.835 [INFO][4417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidac1869deb1 ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.852592 containerd[1447]: 2025-07-07 06:06:13.837 [INFO][4417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.852592 containerd[1447]: 
2025-07-07 06:06:13.840 [INFO][4417] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ef96388-886e-494c-b484-12fab2731020", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7", Pod:"coredns-7c65d6cfc9-lczkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidac1869deb1", MAC:"42:f0:4c:8a:b0:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:13.852592 containerd[1447]: 2025-07-07 06:06:13.849 [INFO][4417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lczkp" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:13.869377 containerd[1447]: time="2025-07-07T06:06:13.869122606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:13.869377 containerd[1447]: time="2025-07-07T06:06:13.869183489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:13.869377 containerd[1447]: time="2025-07-07T06:06:13.869194570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:13.869377 containerd[1447]: time="2025-07-07T06:06:13.869268454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:13.882486 systemd[1]: Started cri-containerd-e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7.scope - libcontainer container e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7. 
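The WorkloadEndpoint dumps above print ports in Go's hex notation (Port:0x35, Port:0x23c1). Decoded, they are exactly the standard coredns ports, which this snippet shows:

// decode_ports.go: the hex port values from the endpoint dumps, decoded.
package main

import "fmt"

func main() {
	ports := map[string]uint16{
		"dns (UDP)":     0x35,   // 53
		"dns-tcp (TCP)": 0x35,   // 53
		"metrics (TCP)": 0x23c1, // 9153
	}
	for name, p := range ports {
		fmt.Printf("%-14s %d\n", name, p)
	}
}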
Jul 7 06:06:13.892879 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:13.912259 containerd[1447]: time="2025-07-07T06:06:13.912155532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lczkp,Uid:8ef96388-886e-494c-b484-12fab2731020,Namespace:kube-system,Attempt:1,} returns sandbox id \"e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7\"" Jul 7 06:06:13.917003 kubelet[2474]: E0707 06:06:13.916980 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:13.920363 containerd[1447]: time="2025-07-07T06:06:13.920306660Z" level=info msg="CreateContainer within sandbox \"e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:06:13.936858 systemd-networkd[1379]: cali91441370f95: Link UP Jul 7 06:06:13.937002 systemd-networkd[1379]: cali91441370f95: Gained carrier Jul 7 06:06:13.941382 containerd[1447]: time="2025-07-07T06:06:13.940488849Z" level=info msg="CreateContainer within sandbox \"e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e32e81b447b5f2464dbe0a1bfae9bbcf10f3d342a791ea4c4dc8961437fee9a\"" Jul 7 06:06:13.945746 containerd[1447]: time="2025-07-07T06:06:13.943496095Z" level=info msg="StartContainer for \"7e32e81b447b5f2464dbe0a1bfae9bbcf10f3d342a791ea4c4dc8961437fee9a\"" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.637 [INFO][4393] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.654 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0 coredns-7c65d6cfc9- kube-system cac05d19-02ca-4bd4-9a83-2d4df21aa5b9 917 0 2025-07-07 06:05:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-4gfqc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali91441370f95 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.654 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.698 [INFO][4450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" HandleID="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.698 [INFO][4450] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" HandleID="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-4gfqc", "timestamp":"2025-07-07 06:06:13.698780522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.698 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.833 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.833 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.901 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.909 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.914 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.916 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.919 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.919 [INFO][4450] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.921 [INFO][4450] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52 Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.925 [INFO][4450] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.930 [INFO][4450] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.930 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" host="localhost" Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.930 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:06:13.949162 containerd[1447]: 2025-07-07 06:06:13.930 [INFO][4450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" HandleID="k8s-pod-network.17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.949710 containerd[1447]: 2025-07-07 06:06:13.935 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-4gfqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91441370f95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:13.949710 containerd[1447]: 2025-07-07 06:06:13.935 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.949710 containerd[1447]: 2025-07-07 06:06:13.935 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91441370f95 ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.949710 containerd[1447]: 2025-07-07 06:06:13.937 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.949710 containerd[1447]: 
2025-07-07 06:06:13.937 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52", Pod:"coredns-7c65d6cfc9-4gfqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91441370f95", MAC:"8a:e4:32:23:d0:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:13.949710 containerd[1447]: 2025-07-07 06:06:13.947 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4gfqc" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:13.965352 containerd[1447]: time="2025-07-07T06:06:13.965224849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:13.965352 containerd[1447]: time="2025-07-07T06:06:13.965280572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:13.965352 containerd[1447]: time="2025-07-07T06:06:13.965296653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:13.965551 containerd[1447]: time="2025-07-07T06:06:13.965436541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:13.975507 systemd[1]: Started cri-containerd-7e32e81b447b5f2464dbe0a1bfae9bbcf10f3d342a791ea4c4dc8961437fee9a.scope - libcontainer container 7e32e81b447b5f2464dbe0a1bfae9bbcf10f3d342a791ea4c4dc8961437fee9a. 
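The WorkloadEndpoint dumps above print port numbers as Go hex literals (Port:0x35, Port:0x23c1). As a quick orientation aid, not part of the Calico codebase, a minimal Go sketch that decodes them back to the familiar CoreDNS ports:

    package main

    import "fmt"

    func main() {
        // Port values exactly as printed in the endpoint dump above.
        ports := map[string]uint16{
            "dns":     0x35,   // 53/UDP
            "dns-tcp": 0x35,   // 53/TCP
            "metrics": 0x23c1, // 9153/TCP, the standard CoreDNS metrics port
        }
        for name, p := range ports {
            fmt.Printf("%s -> %d\n", name, p)
        }
    }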
Jul 7 06:06:13.979046 systemd[1]: Started cri-containerd-17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52.scope - libcontainer container 17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52. Jul 7 06:06:13.993083 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:14.001849 containerd[1447]: time="2025-07-07T06:06:14.001816539Z" level=info msg="StartContainer for \"7e32e81b447b5f2464dbe0a1bfae9bbcf10f3d342a791ea4c4dc8961437fee9a\" returns successfully" Jul 7 06:06:14.019222 containerd[1447]: time="2025-07-07T06:06:14.019187068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4gfqc,Uid:cac05d19-02ca-4bd4-9a83-2d4df21aa5b9,Namespace:kube-system,Attempt:1,} returns sandbox id \"17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52\"" Jul 7 06:06:14.021378 kubelet[2474]: E0707 06:06:14.021349 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:14.026385 containerd[1447]: time="2025-07-07T06:06:14.025846544Z" level=info msg="CreateContainer within sandbox \"17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:06:14.039891 containerd[1447]: time="2025-07-07T06:06:14.039847653Z" level=info msg="CreateContainer within sandbox \"17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00e1e188419187a86a423a0ba562f3fe02e418acbab6283ea5341e578d09b781\"" Jul 7 06:06:14.040855 containerd[1447]: time="2025-07-07T06:06:14.040500968Z" level=info msg="StartContainer for \"00e1e188419187a86a423a0ba562f3fe02e418acbab6283ea5341e578d09b781\"" Jul 7 06:06:14.047117 systemd-networkd[1379]: cali09a41d31e60: Link UP Jul 7 06:06:14.048752 systemd-networkd[1379]: cali09a41d31e60: Gained carrier Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.683 [INFO][4433] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.703 [INFO][4433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0 calico-apiserver-56bdbd79df- calico-apiserver 528c9db9-c113-4e11-bd08-112e227a85e1 918 0 2025-07-07 06:05:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56bdbd79df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56bdbd79df-bkhbd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali09a41d31e60 [] [] }} ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.703 [INFO][4433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 
06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.731 [INFO][4474] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" HandleID="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.732 [INFO][4474] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" HandleID="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56bdbd79df-bkhbd", "timestamp":"2025-07-07 06:06:13.731944025 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.732 [INFO][4474] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.930 [INFO][4474] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:13.930 [INFO][4474] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.003 [INFO][4474] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.010 [INFO][4474] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.015 [INFO][4474] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.016 [INFO][4474] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.019 [INFO][4474] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.019 [INFO][4474] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.021 [INFO][4474] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9 Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.027 [INFO][4474] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.035 [INFO][4474] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" host="localhost" Jul 7 06:06:14.066133 
containerd[1447]: 2025-07-07 06:06:14.035 [INFO][4474] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" host="localhost" Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.035 [INFO][4474] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:14.066133 containerd[1447]: 2025-07-07 06:06:14.035 [INFO][4474] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" HandleID="k8s-pod-network.0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:14.066811 containerd[1447]: 2025-07-07 06:06:14.040 [INFO][4433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"528c9db9-c113-4e11-bd08-112e227a85e1", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56bdbd79df-bkhbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09a41d31e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:14.066811 containerd[1447]: 2025-07-07 06:06:14.040 [INFO][4433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:14.066811 containerd[1447]: 2025-07-07 06:06:14.040 [INFO][4433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09a41d31e60 ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:14.066811 containerd[1447]: 2025-07-07 06:06:14.049 [INFO][4433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:14.066811 containerd[1447]: 2025-07-07 06:06:14.051 [INFO][4433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"528c9db9-c113-4e11-bd08-112e227a85e1", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9", Pod:"calico-apiserver-56bdbd79df-bkhbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09a41d31e60", MAC:"c2:b9:ce:07:cb:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:14.066811 containerd[1447]: 2025-07-07 06:06:14.063 [INFO][4433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-bkhbd" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:14.071593 systemd[1]: Started cri-containerd-00e1e188419187a86a423a0ba562f3fe02e418acbab6283ea5341e578d09b781.scope - libcontainer container 00e1e188419187a86a423a0ba562f3fe02e418acbab6283ea5341e578d09b781. Jul 7 06:06:14.104353 containerd[1447]: time="2025-07-07T06:06:14.104272057Z" level=info msg="StartContainer for \"00e1e188419187a86a423a0ba562f3fe02e418acbab6283ea5341e578d09b781\" returns successfully" Jul 7 06:06:14.107414 containerd[1447]: time="2025-07-07T06:06:14.106854955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:14.107414 containerd[1447]: time="2025-07-07T06:06:14.107042165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:14.107559 containerd[1447]: time="2025-07-07T06:06:14.107423026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:14.107744 containerd[1447]: time="2025-07-07T06:06:14.107572714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:14.139567 systemd[1]: Started cri-containerd-0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9.scope - libcontainer container 0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9. Jul 7 06:06:14.162998 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:14.192061 containerd[1447]: time="2025-07-07T06:06:14.192009268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-bkhbd,Uid:528c9db9-c113-4e11-bd08-112e227a85e1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9\"" Jul 7 06:06:14.584658 systemd[1]: run-netns-cni\x2d1d98e045\x2d639f\x2df687\x2d8529\x2d40917b013c09.mount: Deactivated successfully. Jul 7 06:06:14.584746 systemd[1]: run-netns-cni\x2d187abb75\x2dbe31\x2d3f83\x2df322\x2d9d5381d7d87d.mount: Deactivated successfully. Jul 7 06:06:14.662728 kubelet[2474]: E0707 06:06:14.662130 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:14.666897 kubelet[2474]: E0707 06:06:14.666857 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:14.676817 kubelet[2474]: I0707 06:06:14.676762 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lczkp" podStartSLOduration=33.676748065 podStartE2EDuration="33.676748065s" podCreationTimestamp="2025-07-07 06:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:14.674669713 +0000 UTC m=+39.289166969" watchObservedRunningTime="2025-07-07 06:06:14.676748065 +0000 UTC m=+39.291245321" Jul 7 06:06:14.710798 kubelet[2474]: I0707 06:06:14.710575 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4gfqc" podStartSLOduration=33.710558112 podStartE2EDuration="33.710558112s" podCreationTimestamp="2025-07-07 06:05:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:14.691086391 +0000 UTC m=+39.305583647" watchObservedRunningTime="2025-07-07 06:06:14.710558112 +0000 UTC m=+39.325055368" Jul 7 06:06:15.138436 systemd-networkd[1379]: cali09a41d31e60: Gained IPv6LL Jul 7 06:06:15.261598 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:47816.service - OpenSSH per-connection server daemon (10.0.0.1:47816). Jul 7 06:06:15.265543 systemd-networkd[1379]: cali91441370f95: Gained IPv6LL Jul 7 06:06:15.270581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249785005.mount: Deactivated successfully. 
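The recurring kubelet warning above ("Nameserver limits exceeded") reflects the libc resolver's three-nameserver cap (MAXNS = 3): when more upstream servers are configured, kubelet applies only the first three and logs the rest as omitted. A hedged Go sketch of that truncation step, simplified from what kubelet actually does; the fourth server below is a hypothetical stand-in for whatever entry was dropped:

    package main

    import "fmt"

    // maxNameservers mirrors the glibc resolver limit (MAXNS = 3) behind the
    // kubelet warning; entries past the third are dropped, not used.
    const maxNameservers = 3

    func applyNameserverLimit(ns []string) []string {
        if len(ns) > maxNameservers {
            return ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // First three match the applied line in the log; 192.0.2.53 is a
        // documentation-range address standing in for the omitted entry.
        applied := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"})
        fmt.Println(applied) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }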
Jul 7 06:06:15.326451 sshd[4788]: Accepted publickey for core from 10.0.0.1 port 47816 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:15.328969 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:15.333926 systemd-logind[1421]: New session 8 of user core. Jul 7 06:06:15.342819 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:06:15.613617 sshd[4788]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:15.617539 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:47816.service: Deactivated successfully. Jul 7 06:06:15.619800 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:06:15.620418 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:06:15.621847 systemd-logind[1421]: Removed session 8. Jul 7 06:06:15.649510 systemd-networkd[1379]: cali56917c25625: Gained IPv6LL Jul 7 06:06:15.712294 kubelet[2474]: E0707 06:06:15.712250 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:15.712686 kubelet[2474]: E0707 06:06:15.712307 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:15.767096 containerd[1447]: time="2025-07-07T06:06:15.767047272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:15.767549 containerd[1447]: time="2025-07-07T06:06:15.767511697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 7 06:06:15.769070 containerd[1447]: time="2025-07-07T06:06:15.769031816Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:15.771090 containerd[1447]: time="2025-07-07T06:06:15.771060201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:15.772063 containerd[1447]: time="2025-07-07T06:06:15.772028412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.945680797s" Jul 7 06:06:15.772132 containerd[1447]: time="2025-07-07T06:06:15.772064334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 7 06:06:15.773613 containerd[1447]: time="2025-07-07T06:06:15.773581573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 06:06:15.774739 containerd[1447]: time="2025-07-07T06:06:15.774716632Z" level=info msg="CreateContainer within sandbox \"07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 06:06:15.777610 systemd-networkd[1379]: calidac1869deb1: Gained IPv6LL Jul 7 06:06:15.787859 
containerd[1447]: time="2025-07-07T06:06:15.787798032Z" level=info msg="CreateContainer within sandbox \"07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"13324de528caacd565b438a9d0be203307fa166a9e14f64eb97074e31074b3cb\"" Jul 7 06:06:15.789240 containerd[1447]: time="2025-07-07T06:06:15.788442946Z" level=info msg="StartContainer for \"13324de528caacd565b438a9d0be203307fa166a9e14f64eb97074e31074b3cb\"" Jul 7 06:06:15.818507 systemd[1]: Started cri-containerd-13324de528caacd565b438a9d0be203307fa166a9e14f64eb97074e31074b3cb.scope - libcontainer container 13324de528caacd565b438a9d0be203307fa166a9e14f64eb97074e31074b3cb. Jul 7 06:06:15.861039 containerd[1447]: time="2025-07-07T06:06:15.861000123Z" level=info msg="StartContainer for \"13324de528caacd565b438a9d0be203307fa166a9e14f64eb97074e31074b3cb\" returns successfully" Jul 7 06:06:16.465542 containerd[1447]: time="2025-07-07T06:06:16.465414766Z" level=info msg="StopPodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\"" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.506 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.508 [INFO][4898] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" iface="eth0" netns="/var/run/netns/cni-0ead1ae9-5ba2-c466-dd25-2e35e73fb102" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.508 [INFO][4898] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" iface="eth0" netns="/var/run/netns/cni-0ead1ae9-5ba2-c466-dd25-2e35e73fb102" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.508 [INFO][4898] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" iface="eth0" netns="/var/run/netns/cni-0ead1ae9-5ba2-c466-dd25-2e35e73fb102" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.508 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.508 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.528 [INFO][4907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.528 [INFO][4907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.528 [INFO][4907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.537 [WARNING][4907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.538 [INFO][4907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.541 [INFO][4907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:16.545552 containerd[1447]: 2025-07-07 06:06:16.543 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:16.546058 containerd[1447]: time="2025-07-07T06:06:16.545661396Z" level=info msg="TearDown network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" successfully" Jul 7 06:06:16.546058 containerd[1447]: time="2025-07-07T06:06:16.545692318Z" level=info msg="StopPodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" returns successfully" Jul 7 06:06:16.546331 containerd[1447]: time="2025-07-07T06:06:16.546292468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd865784c-rm28g,Uid:18c7c67e-01e0-40ea-8d99-ba460eb1fde4,Namespace:calico-system,Attempt:1,}" Jul 7 06:06:16.548582 systemd[1]: run-netns-cni\x2d0ead1ae9\x2d5ba2\x2dc466\x2ddd25\x2d2e35e73fb102.mount: Deactivated successfully. Jul 7 06:06:16.700819 systemd-networkd[1379]: cali7d41e44cdeb: Link UP Jul 7 06:06:16.701032 systemd-networkd[1379]: cali7d41e44cdeb: Gained carrier Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.578 [INFO][4928] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.596 [INFO][4928] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0 calico-kube-controllers-7fd865784c- calico-system 18c7c67e-01e0-40ea-8d99-ba460eb1fde4 1009 0 2025-07-07 06:05:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fd865784c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7fd865784c-rm28g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7d41e44cdeb [] [] }} ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.596 [INFO][4928] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.642 
[INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" HandleID="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.642 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" HandleID="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7fd865784c-rm28g", "timestamp":"2025-07-07 06:06:16.642055006 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.642 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.642 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.642 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.654 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.662 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.669 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.673 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.677 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.677 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.681 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0 Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.685 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.693 [INFO][4944] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.693 
[INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" host="localhost" Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.693 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:16.718163 containerd[1447]: 2025-07-07 06:06:16.693 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" HandleID="k8s-pod-network.ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.718744 containerd[1447]: 2025-07-07 06:06:16.696 [INFO][4928] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0", GenerateName:"calico-kube-controllers-7fd865784c-", Namespace:"calico-system", SelfLink:"", UID:"18c7c67e-01e0-40ea-8d99-ba460eb1fde4", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd865784c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7fd865784c-rm28g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d41e44cdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:16.718744 containerd[1447]: 2025-07-07 06:06:16.696 [INFO][4928] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.718744 containerd[1447]: 2025-07-07 06:06:16.696 [INFO][4928] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d41e44cdeb ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.718744 containerd[1447]: 2025-07-07 06:06:16.701 [INFO][4928] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.718744 containerd[1447]: 2025-07-07 06:06:16.701 [INFO][4928] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0", GenerateName:"calico-kube-controllers-7fd865784c-", Namespace:"calico-system", SelfLink:"", UID:"18c7c67e-01e0-40ea-8d99-ba460eb1fde4", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd865784c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0", Pod:"calico-kube-controllers-7fd865784c-rm28g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d41e44cdeb", MAC:"1a:f5:f1:24:26:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:16.718744 containerd[1447]: 2025-07-07 06:06:16.715 [INFO][4928] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0" Namespace="calico-system" Pod="calico-kube-controllers-7fd865784c-rm28g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:16.722683 kubelet[2474]: E0707 06:06:16.721634 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:16.722683 kubelet[2474]: E0707 06:06:16.721854 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:16.742449 kubelet[2474]: I0707 06:06:16.742379 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-6bhqv" podStartSLOduration=19.795668042 podStartE2EDuration="21.742353533s" podCreationTimestamp="2025-07-07 06:05:55 +0000 UTC" firstStartedPulling="2025-07-07 06:06:13.82609148 +0000 UTC m=+38.440588696" lastFinishedPulling="2025-07-07 06:06:15.772776931 +0000 UTC m=+40.387274187" 
observedRunningTime="2025-07-07 06:06:16.741792265 +0000 UTC m=+41.356289521" watchObservedRunningTime="2025-07-07 06:06:16.742353533 +0000 UTC m=+41.356850789" Jul 7 06:06:16.775597 containerd[1447]: time="2025-07-07T06:06:16.775418891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:16.775597 containerd[1447]: time="2025-07-07T06:06:16.775508735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:16.775597 containerd[1447]: time="2025-07-07T06:06:16.775524616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:16.777098 containerd[1447]: time="2025-07-07T06:06:16.776412661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:16.823514 systemd[1]: Started cri-containerd-ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0.scope - libcontainer container ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0. Jul 7 06:06:16.838461 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:16.857511 kubelet[2474]: I0707 06:06:16.857475 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:06:16.857839 kubelet[2474]: E0707 06:06:16.857823 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:16.876340 containerd[1447]: time="2025-07-07T06:06:16.875755620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fd865784c-rm28g,Uid:18c7c67e-01e0-40ea-8d99-ba460eb1fde4,Namespace:calico-system,Attempt:1,} returns sandbox id \"ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0\"" Jul 7 06:06:17.359912 containerd[1447]: time="2025-07-07T06:06:17.359867728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:17.360928 containerd[1447]: time="2025-07-07T06:06:17.360901300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 7 06:06:17.362268 containerd[1447]: time="2025-07-07T06:06:17.362229485Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:17.365095 containerd[1447]: time="2025-07-07T06:06:17.365043305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:17.366553 containerd[1447]: time="2025-07-07T06:06:17.366507657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.592883202s" Jul 7 06:06:17.366553 containerd[1447]: 
time="2025-07-07T06:06:17.366544259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 7 06:06:17.369501 containerd[1447]: time="2025-07-07T06:06:17.368938737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 06:06:17.371006 containerd[1447]: time="2025-07-07T06:06:17.370975598Z" level=info msg="CreateContainer within sandbox \"0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:06:17.401349 kernel: bpftool[5032]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 06:06:17.466342 containerd[1447]: time="2025-07-07T06:06:17.465777689Z" level=info msg="StopPodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\"" Jul 7 06:06:17.466342 containerd[1447]: time="2025-07-07T06:06:17.466106985Z" level=info msg="StopPodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\"" Jul 7 06:06:17.492033 containerd[1447]: time="2025-07-07T06:06:17.491869260Z" level=info msg="CreateContainer within sandbox \"0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a4815b7467d94a8d2d870e4c1a719d13ad1900c4d8ba8b35c84154f65b6226bf\"" Jul 7 06:06:17.495995 containerd[1447]: time="2025-07-07T06:06:17.492691581Z" level=info msg="StartContainer for \"a4815b7467d94a8d2d870e4c1a719d13ad1900c4d8ba8b35c84154f65b6226bf\"" Jul 7 06:06:17.535490 systemd[1]: Started cri-containerd-a4815b7467d94a8d2d870e4c1a719d13ad1900c4d8ba8b35c84154f65b6226bf.scope - libcontainer container a4815b7467d94a8d2d870e4c1a719d13ad1900c4d8ba8b35c84154f65b6226bf. Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.542 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.544 [INFO][5067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" iface="eth0" netns="/var/run/netns/cni-4a89e592-9bb8-e861-2358-7364742d7fbc" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.544 [INFO][5067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" iface="eth0" netns="/var/run/netns/cni-4a89e592-9bb8-e861-2358-7364742d7fbc" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.544 [INFO][5067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" iface="eth0" netns="/var/run/netns/cni-4a89e592-9bb8-e861-2358-7364742d7fbc" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.544 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.544 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.567 [INFO][5103] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.567 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.567 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.580 [WARNING][5103] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.580 [INFO][5103] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.583 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:17.591957 containerd[1447]: 2025-07-07 06:06:17.588 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:17.592798 containerd[1447]: time="2025-07-07T06:06:17.592674768Z" level=info msg="TearDown network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" successfully" Jul 7 06:06:17.592798 containerd[1447]: time="2025-07-07T06:06:17.592704289Z" level=info msg="StopPodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" returns successfully" Jul 7 06:06:17.593803 containerd[1447]: time="2025-07-07T06:06:17.593529090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-xdcfc,Uid:532a5639-1429-4a85-8fbb-b79c8b04dfd3,Namespace:calico-apiserver,Attempt:1,}" Jul 7 06:06:17.595971 containerd[1447]: time="2025-07-07T06:06:17.595939689Z" level=info msg="StartContainer for \"a4815b7467d94a8d2d870e4c1a719d13ad1900c4d8ba8b35c84154f65b6226bf\" returns successfully" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.547 [INFO][5057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.547 [INFO][5057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" iface="eth0" netns="/var/run/netns/cni-926bc6f5-e0a2-70ef-dadb-dbc4f0fd6deb" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.547 [INFO][5057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" iface="eth0" netns="/var/run/netns/cni-926bc6f5-e0a2-70ef-dadb-dbc4f0fd6deb" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.548 [INFO][5057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" iface="eth0" netns="/var/run/netns/cni-926bc6f5-e0a2-70ef-dadb-dbc4f0fd6deb" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.548 [INFO][5057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.548 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.594 [INFO][5110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.594 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.595 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.605 [WARNING][5110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.606 [INFO][5110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.610 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:17.622454 containerd[1447]: 2025-07-07 06:06:17.617 [INFO][5057] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:17.623652 containerd[1447]: time="2025-07-07T06:06:17.622699573Z" level=info msg="TearDown network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" successfully" Jul 7 06:06:17.623652 containerd[1447]: time="2025-07-07T06:06:17.622725535Z" level=info msg="StopPodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" returns successfully" Jul 7 06:06:17.623652 containerd[1447]: time="2025-07-07T06:06:17.623312524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jghp4,Uid:632e2793-d8ae-43c1-a1dd-7d580aa97009,Namespace:calico-system,Attempt:1,}" Jul 7 06:06:17.702866 systemd-networkd[1379]: vxlan.calico: Link UP Jul 7 06:06:17.702875 systemd-networkd[1379]: vxlan.calico: Gained carrier Jul 7 06:06:17.743332 kubelet[2474]: I0707 06:06:17.743035 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:06:17.744333 kubelet[2474]: E0707 06:06:17.744229 2474 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:06:17.754263 kubelet[2474]: I0707 06:06:17.754183 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56bdbd79df-bkhbd" podStartSLOduration=24.579830782 podStartE2EDuration="27.754166078s" podCreationTimestamp="2025-07-07 06:05:50 +0000 UTC" firstStartedPulling="2025-07-07 06:06:14.193067765 +0000 UTC m=+38.807565021" lastFinishedPulling="2025-07-07 06:06:17.367403061 +0000 UTC m=+41.981900317" observedRunningTime="2025-07-07 06:06:17.753987349 +0000 UTC m=+42.368484605" watchObservedRunningTime="2025-07-07 06:06:17.754166078 +0000 UTC m=+42.368663334" Jul 7 06:06:17.789166 systemd[1]: run-netns-cni\x2d926bc6f5\x2de0a2\x2d70ef\x2ddadb\x2ddbc4f0fd6deb.mount: Deactivated successfully. Jul 7 06:06:17.789248 systemd[1]: run-netns-cni\x2d4a89e592\x2d9bb8\x2de861\x2d2358\x2d7364742d7fbc.mount: Deactivated successfully. 
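The teardown sequence above (StopPodSandbox, CNI DEL, IPAM release, netns unmount) includes the WARNING "Asked to release address but it doesn't exist. Ignoring": release is deliberately idempotent, so a repeated or racing delete still succeeds. A minimal Go sketch of that treat-missing-as-released pattern, an illustration rather than Calico's actual IPAM code:

    package main

    import "fmt"

    // releaseByHandle treats a missing allocation as already released, which is
    // the behaviour the WARNING above records: log it and carry on, so a second
    // CNI DEL for the same sandbox cannot fail teardown.
    func releaseByHandle(allocs map[string]string, handleID string) error {
        if _, ok := allocs[handleID]; !ok {
            fmt.Printf("[WARNING] asked to release %s but it doesn't exist; ignoring\n", handleID)
            return nil
        }
        delete(allocs, handleID)
        return nil
    }

    func main() {
        allocs := map[string]string{}                           // hypothetical empty allocation table
        _ = releaseByHandle(allocs, "k8s-pod-network.example")  // warns, still succeeds
    }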
Jul 7 06:06:17.808865 systemd-networkd[1379]: cali05d0cc9473d: Link UP Jul 7 06:06:17.809003 systemd-networkd[1379]: cali05d0cc9473d: Gained carrier Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.690 [INFO][5164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0 calico-apiserver-56bdbd79df- calico-apiserver 532a5639-1429-4a85-8fbb-b79c8b04dfd3 1038 0 2025-07-07 06:05:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56bdbd79df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56bdbd79df-xdcfc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05d0cc9473d [] [] }} ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.691 [INFO][5164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.753 [INFO][5194] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" HandleID="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.753 [INFO][5194] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" HandleID="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b0de0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56bdbd79df-xdcfc", "timestamp":"2025-07-07 06:06:17.751116767 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.753 [INFO][5194] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.753 [INFO][5194] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.753 [INFO][5194] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.764 [INFO][5194] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.768 [INFO][5194] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.773 [INFO][5194] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.775 [INFO][5194] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.777 [INFO][5194] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.777 [INFO][5194] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.779 [INFO][5194] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.786 [INFO][5194] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.800 [INFO][5194] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.800 [INFO][5194] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" host="localhost" Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.800 [INFO][5194] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
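[Editor's annotation] The ipam.go lines above walk Calico's block-affinity path: confirm this host's affinity for 192.168.88.128/26, load the block, pick a free ordinal inside it, then write the block back ("Writing block in order to claim IPs" — the claim only sticks if that write succeeds, which is why the trace distinguishes attempting from successfully claiming). A toy sketch of the in-block scan, illustrative only and not Calico's implementation, which also manages handles, retries the block write on conflict, and claims fresh blocks when one is exhausted:

package main

import (
	"fmt"
	"net/netip"
)

// block models one /26 affinity block: a base address plus 64 slots.
type block struct {
	base      netip.Addr
	allocated [64]bool
}

// assign returns the first free address in the block, if any.
func (b *block) assign() (netip.Addr, bool) {
	for ord, used := range b.allocated {
		if !used {
			b.allocated[ord] = true
			addr := b.base
			for i := 0; i < ord; i++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false // exhausted: the caller would claim a new block
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.88.128")}
	for i := 0; i < 7; i++ {
		b.allocated[i] = true // pretend .128–.134 are taken, consistent with .135 being the next claim
	}
	ip, _ := b.assign()
	fmt.Println(ip) // 192.168.88.135, the address claimed in the trace above
}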
Jul 7 06:06:17.834245 containerd[1447]: 2025-07-07 06:06:17.800 [INFO][5194] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" HandleID="k8s-pod-network.f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.835215 containerd[1447]: 2025-07-07 06:06:17.805 [INFO][5164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"532a5639-1429-4a85-8fbb-b79c8b04dfd3", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56bdbd79df-xdcfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05d0cc9473d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:17.835215 containerd[1447]: 2025-07-07 06:06:17.805 [INFO][5164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.835215 containerd[1447]: 2025-07-07 06:06:17.805 [INFO][5164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05d0cc9473d ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.835215 containerd[1447]: 2025-07-07 06:06:17.807 [INFO][5164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.835215 containerd[1447]: 2025-07-07 06:06:17.813 [INFO][5164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"532a5639-1429-4a85-8fbb-b79c8b04dfd3", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b", Pod:"calico-apiserver-56bdbd79df-xdcfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05d0cc9473d", MAC:"9a:8d:d0:70:11:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:17.835215 containerd[1447]: 2025-07-07 06:06:17.828 [INFO][5164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b" Namespace="calico-apiserver" Pod="calico-apiserver-56bdbd79df-xdcfc" WorkloadEndpoint="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:17.880173 containerd[1447]: time="2025-07-07T06:06:17.880024426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:17.880173 containerd[1447]: time="2025-07-07T06:06:17.880076748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:17.880173 containerd[1447]: time="2025-07-07T06:06:17.880087589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:17.882269 containerd[1447]: time="2025-07-07T06:06:17.880489849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:17.914537 systemd-networkd[1379]: cali772e026cb76: Link UP Jul 7 06:06:17.915261 systemd-networkd[1379]: cali772e026cb76: Gained carrier Jul 7 06:06:17.916470 systemd[1]: Started cri-containerd-f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b.scope - libcontainer container f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b. 
Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.705 [INFO][5158] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jghp4-eth0 csi-node-driver- calico-system 632e2793-d8ae-43c1-a1dd-7d580aa97009 1039 0 2025-07-07 06:05:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jghp4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali772e026cb76 [] [] }} ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.706 [INFO][5158] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.761 [INFO][5201] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" HandleID="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.761 [INFO][5201] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" HandleID="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038a8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jghp4", "timestamp":"2025-07-07 06:06:17.761653409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.761 [INFO][5201] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.800 [INFO][5201] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.800 [INFO][5201] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.866 [INFO][5201] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.873 [INFO][5201] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.881 [INFO][5201] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.885 [INFO][5201] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.888 [INFO][5201] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.888 [INFO][5201] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.891 [INFO][5201] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.895 [INFO][5201] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.904 [INFO][5201] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.904 [INFO][5201] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" host="localhost" Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.904 [INFO][5201] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
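[Editor's annotation] Worth noticing across the two ADD traces above: request [5201] logs "About to acquire host-wide IPAM lock" at 17.761 but only acquires it at 17.800, the instant [5194] releases it, so concurrent CNI ADDs on one node serialize their block updates through a single lock. A self-contained sketch of that interleaving, assuming nothing more than a process-wide mutex:

package main

import (
	"fmt"
	"sync"
	"time"
)

var ipamMu sync.Mutex // stands in for the host-wide IPAM lock

func cniAdd(id string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println(id, "About to acquire host-wide IPAM lock.")
	ipamMu.Lock()
	fmt.Println(id, "Acquired host-wide IPAM lock.")
	time.Sleep(40 * time.Millisecond) // affinity lookup, block read, block write
	ipamMu.Unlock()
	fmt.Println(id, "Released host-wide IPAM lock.")
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go cniAdd("[5194]", &wg)
	go cniAdd("[5201]", &wg)
	wg.Wait() // one request always waits out the other's critical section
}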
Jul 7 06:06:17.931478 containerd[1447]: 2025-07-07 06:06:17.904 [INFO][5201] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" HandleID="k8s-pod-network.decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.932314 containerd[1447]: 2025-07-07 06:06:17.910 [INFO][5158] cni-plugin/k8s.go 418: Populated endpoint ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jghp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"632e2793-d8ae-43c1-a1dd-7d580aa97009", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jghp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali772e026cb76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:17.932314 containerd[1447]: 2025-07-07 06:06:17.910 [INFO][5158] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.932314 containerd[1447]: 2025-07-07 06:06:17.910 [INFO][5158] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali772e026cb76 ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.932314 containerd[1447]: 2025-07-07 06:06:17.914 [INFO][5158] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.932314 containerd[1447]: 2025-07-07 06:06:17.916 [INFO][5158] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jghp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"632e2793-d8ae-43c1-a1dd-7d580aa97009", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c", Pod:"csi-node-driver-jghp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali772e026cb76", MAC:"b2:60:e3:8c:8c:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:17.932314 containerd[1447]: 2025-07-07 06:06:17.928 [INFO][5158] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c" Namespace="calico-system" Pod="csi-node-driver-jghp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:17.946384 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:17.954785 systemd-networkd[1379]: cali7d41e44cdeb: Gained IPv6LL Jul 7 06:06:17.959816 containerd[1447]: time="2025-07-07T06:06:17.959668006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 06:06:17.960161 containerd[1447]: time="2025-07-07T06:06:17.960099468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 06:06:17.960161 containerd[1447]: time="2025-07-07T06:06:17.960145030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:17.960638 containerd[1447]: time="2025-07-07T06:06:17.960389562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 06:06:17.985564 containerd[1447]: time="2025-07-07T06:06:17.985523366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56bdbd79df-xdcfc,Uid:532a5639-1429-4a85-8fbb-b79c8b04dfd3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b\"" Jul 7 06:06:17.991994 containerd[1447]: time="2025-07-07T06:06:17.991952164Z" level=info msg="CreateContainer within sandbox \"f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 06:06:17.995537 systemd[1]: Started cri-containerd-decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c.scope - libcontainer container decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c. Jul 7 06:06:18.009349 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:06:18.018200 containerd[1447]: time="2025-07-07T06:06:18.018154919Z" level=info msg="CreateContainer within sandbox \"f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"10feb9d098cb702f0f67574ccf2c8c2a8e31efa44f01c0efa1bf126914e35e14\"" Jul 7 06:06:18.019852 containerd[1447]: time="2025-07-07T06:06:18.019778558Z" level=info msg="StartContainer for \"10feb9d098cb702f0f67574ccf2c8c2a8e31efa44f01c0efa1bf126914e35e14\"" Jul 7 06:06:18.023781 containerd[1447]: time="2025-07-07T06:06:18.023744949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jghp4,Uid:632e2793-d8ae-43c1-a1dd-7d580aa97009,Namespace:calico-system,Attempt:1,} returns sandbox id \"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c\"" Jul 7 06:06:18.050687 systemd[1]: Started cri-containerd-10feb9d098cb702f0f67574ccf2c8c2a8e31efa44f01c0efa1bf126914e35e14.scope - libcontainer container 10feb9d098cb702f0f67574ccf2c8c2a8e31efa44f01c0efa1bf126914e35e14. 
Jul 7 06:06:18.104561 containerd[1447]: time="2025-07-07T06:06:18.104305762Z" level=info msg="StartContainer for \"10feb9d098cb702f0f67574ccf2c8c2a8e31efa44f01c0efa1bf126914e35e14\" returns successfully" Jul 7 06:06:18.766374 kubelet[2474]: I0707 06:06:18.765992 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:06:18.977610 systemd-networkd[1379]: cali772e026cb76: Gained IPv6LL Jul 7 06:06:19.426501 systemd-networkd[1379]: vxlan.calico: Gained IPv6LL Jul 7 06:06:19.637010 containerd[1447]: time="2025-07-07T06:06:19.636947994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:19.637802 containerd[1447]: time="2025-07-07T06:06:19.637711150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 7 06:06:19.638423 containerd[1447]: time="2025-07-07T06:06:19.638387622Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:19.640915 containerd[1447]: time="2025-07-07T06:06:19.640884380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:19.642144 containerd[1447]: time="2025-07-07T06:06:19.641644936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.272672237s" Jul 7 06:06:19.642144 containerd[1447]: time="2025-07-07T06:06:19.641687458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 7 06:06:19.643067 containerd[1447]: time="2025-07-07T06:06:19.643045442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 06:06:19.656753 containerd[1447]: time="2025-07-07T06:06:19.656714528Z" level=info msg="CreateContainer within sandbox \"ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 06:06:19.673049 containerd[1447]: time="2025-07-07T06:06:19.672997497Z" level=info msg="CreateContainer within sandbox \"ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"01a2d8e6e6c1b167e7f8de3652c787e1b002f19232daf345d1bd0913fdde7984\"" Jul 7 06:06:19.673517 containerd[1447]: time="2025-07-07T06:06:19.673492880Z" level=info msg="StartContainer for \"01a2d8e6e6c1b167e7f8de3652c787e1b002f19232daf345d1bd0913fdde7984\"" Jul 7 06:06:19.684432 systemd-networkd[1379]: cali05d0cc9473d: Gained IPv6LL Jul 7 06:06:19.702533 systemd[1]: Started cri-containerd-01a2d8e6e6c1b167e7f8de3652c787e1b002f19232daf345d1bd0913fdde7984.scope - libcontainer container 01a2d8e6e6c1b167e7f8de3652c787e1b002f19232daf345d1bd0913fdde7984. 
Jul 7 06:06:19.747586 containerd[1447]: time="2025-07-07T06:06:19.746846384Z" level=info msg="StartContainer for \"01a2d8e6e6c1b167e7f8de3652c787e1b002f19232daf345d1bd0913fdde7984\" returns successfully" Jul 7 06:06:19.788499 kubelet[2474]: I0707 06:06:19.788306 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56bdbd79df-xdcfc" podStartSLOduration=29.78828582 podStartE2EDuration="29.78828582s" podCreationTimestamp="2025-07-07 06:05:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:06:18.801031783 +0000 UTC m=+43.415529039" watchObservedRunningTime="2025-07-07 06:06:19.78828582 +0000 UTC m=+44.402783076" Jul 7 06:06:19.789601 kubelet[2474]: I0707 06:06:19.789466 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fd865784c-rm28g" podStartSLOduration=22.024069229 podStartE2EDuration="24.789455316s" podCreationTimestamp="2025-07-07 06:05:55 +0000 UTC" firstStartedPulling="2025-07-07 06:06:16.877260016 +0000 UTC m=+41.491757272" lastFinishedPulling="2025-07-07 06:06:19.642646103 +0000 UTC m=+44.257143359" observedRunningTime="2025-07-07 06:06:19.789177863 +0000 UTC m=+44.403675119" watchObservedRunningTime="2025-07-07 06:06:19.789455316 +0000 UTC m=+44.403952572" Jul 7 06:06:20.627549 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:47830.service - OpenSSH per-connection server daemon (10.0.0.1:47830). Jul 7 06:06:20.695450 containerd[1447]: time="2025-07-07T06:06:20.695391902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:20.697193 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 47830 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:20.698702 containerd[1447]: time="2025-07-07T06:06:20.698645173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 7 06:06:20.698919 sshd[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:20.700237 containerd[1447]: time="2025-07-07T06:06:20.700091759Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:20.704797 containerd[1447]: time="2025-07-07T06:06:20.704734854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:20.705464 containerd[1447]: time="2025-07-07T06:06:20.705425886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.061076062s" Jul 7 06:06:20.705529 containerd[1447]: time="2025-07-07T06:06:20.705464768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 7 06:06:20.706056 systemd-logind[1421]: New session 9 of user core. 
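[Editor's annotation] Note how the calico-apiserver-56bdbd79df-xdcfc latency record above differs from the earlier ones: firstStartedPulling and lastFinishedPulling sit at Go's zero time (0001-01-01 00:00:00 +0000 UTC), which is what you would expect when no image pull was attributed to this start, and accordingly podStartSLOduration equals podStartE2EDuration exactly (29.78828582s both), matching the arithmetic sketched after the 06:06:17 latency record.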
Jul 7 06:06:20.710351 containerd[1447]: time="2025-07-07T06:06:20.709542916Z" level=info msg="CreateContainer within sandbox \"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 06:06:20.712481 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:06:20.730239 containerd[1447]: time="2025-07-07T06:06:20.730151068Z" level=info msg="CreateContainer within sandbox \"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"72b4fa90af4f64ea27099d24acff4fb505c12fcdd0749f9c2bc5cfed2fa5e96a\"" Jul 7 06:06:20.731465 containerd[1447]: time="2025-07-07T06:06:20.730975786Z" level=info msg="StartContainer for \"72b4fa90af4f64ea27099d24acff4fb505c12fcdd0749f9c2bc5cfed2fa5e96a\"" Jul 7 06:06:20.771553 systemd[1]: Started cri-containerd-72b4fa90af4f64ea27099d24acff4fb505c12fcdd0749f9c2bc5cfed2fa5e96a.scope - libcontainer container 72b4fa90af4f64ea27099d24acff4fb505c12fcdd0749f9c2bc5cfed2fa5e96a. Jul 7 06:06:20.827361 containerd[1447]: time="2025-07-07T06:06:20.827077866Z" level=info msg="StartContainer for \"72b4fa90af4f64ea27099d24acff4fb505c12fcdd0749f9c2bc5cfed2fa5e96a\" returns successfully" Jul 7 06:06:20.830011 containerd[1447]: time="2025-07-07T06:06:20.829983400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 06:06:21.020493 sshd[5531]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:21.024671 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:47830.service: Deactivated successfully. Jul 7 06:06:21.027850 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 06:06:21.029221 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Jul 7 06:06:21.031584 systemd-logind[1421]: Removed session 9. 
Jul 7 06:06:21.288710 kubelet[2474]: I0707 06:06:21.288598 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:06:21.944228 containerd[1447]: time="2025-07-07T06:06:21.944179325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:21.944883 containerd[1447]: time="2025-07-07T06:06:21.944842115Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 7 06:06:21.946410 containerd[1447]: time="2025-07-07T06:06:21.946373744Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:21.948649 containerd[1447]: time="2025-07-07T06:06:21.948592525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:06:21.949473 containerd[1447]: time="2025-07-07T06:06:21.949424482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.119272915s" Jul 7 06:06:21.949473 containerd[1447]: time="2025-07-07T06:06:21.949466884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 7 06:06:21.952465 containerd[1447]: time="2025-07-07T06:06:21.952431458Z" level=info msg="CreateContainer within sandbox \"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:06:21.972187 containerd[1447]: time="2025-07-07T06:06:21.972127109Z" level=info msg="CreateContainer within sandbox \"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"06e6212af96f030b0d8d7b4b8d20987ffae524097c32db70eaa461182c9a001e\"" Jul 7 06:06:21.973073 containerd[1447]: time="2025-07-07T06:06:21.973038551Z" level=info msg="StartContainer for \"06e6212af96f030b0d8d7b4b8d20987ffae524097c32db70eaa461182c9a001e\"" Jul 7 06:06:22.007603 systemd[1]: Started cri-containerd-06e6212af96f030b0d8d7b4b8d20987ffae524097c32db70eaa461182c9a001e.scope - libcontainer container 06e6212af96f030b0d8d7b4b8d20987ffae524097c32db70eaa461182c9a001e. 
Jul 7 06:06:22.040062 containerd[1447]: time="2025-07-07T06:06:22.039963024Z" level=info msg="StartContainer for \"06e6212af96f030b0d8d7b4b8d20987ffae524097c32db70eaa461182c9a001e\" returns successfully" Jul 7 06:06:22.556571 kubelet[2474]: I0707 06:06:22.556522 2474 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:06:22.558991 kubelet[2474]: I0707 06:06:22.558964 2474 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:06:22.804526 kubelet[2474]: I0707 06:06:22.804415 2474 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jghp4" podStartSLOduration=23.880437868 podStartE2EDuration="27.804384514s" podCreationTimestamp="2025-07-07 06:05:55 +0000 UTC" firstStartedPulling="2025-07-07 06:06:18.026419759 +0000 UTC m=+42.640917015" lastFinishedPulling="2025-07-07 06:06:21.950366405 +0000 UTC m=+46.564863661" observedRunningTime="2025-07-07 06:06:22.804036138 +0000 UTC m=+47.418533394" watchObservedRunningTime="2025-07-07 06:06:22.804384514 +0000 UTC m=+47.418881770" Jul 7 06:06:23.427141 kubelet[2474]: I0707 06:06:23.426942 2474 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:06:26.032704 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:38006.service - OpenSSH per-connection server daemon (10.0.0.1:38006). Jul 7 06:06:26.106698 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 38006 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:26.108918 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:26.114884 systemd-logind[1421]: New session 10 of user core. Jul 7 06:06:26.121466 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:06:26.429063 sshd[5707]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:26.439525 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:38006.service: Deactivated successfully. Jul 7 06:06:26.441633 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:06:26.443064 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:06:26.451636 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:38016.service - OpenSSH per-connection server daemon (10.0.0.1:38016). Jul 7 06:06:26.453693 systemd-logind[1421]: Removed session 10. Jul 7 06:06:26.486277 sshd[5728]: Accepted publickey for core from 10.0.0.1 port 38016 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:26.487468 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:26.492411 systemd-logind[1421]: New session 11 of user core. Jul 7 06:06:26.498454 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 06:06:26.734582 sshd[5728]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:26.746598 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:38016.service: Deactivated successfully. Jul 7 06:06:26.748186 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 06:06:26.752012 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Jul 7 06:06:26.759097 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:38028.service - OpenSSH per-connection server daemon (10.0.0.1:38028). Jul 7 06:06:26.761049 systemd-logind[1421]: Removed session 11. 
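[Editor's annotation] On the csi_plugin.go lines above, kubelet discovers a new driver socket and registers csi.tigera.io at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. The real handshake is a gRPC exchange (a GetInfo call plus an identity probe) that this sketch deliberately omits; it shows only the transport side, a unix socket at the path taken from the log, created the way a driver typically does on startup:

package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	dir := "/var/lib/kubelet/plugins/csi.tigera.io" // directory from the log
	sock := dir + "/csi.sock"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = os.Remove(sock) // clear a stale socket left by a previous run
	l, err := net.Listen("unix", sock)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer l.Close()
	fmt.Println("listening on", sock)
	conn, err := l.Accept() // kubelet would dial here and speak gRPC
	if err == nil {
		conn.Close()
	}
}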
Jul 7 06:06:26.795610 sshd[5741]: Accepted publickey for core from 10.0.0.1 port 38028 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:26.796832 sshd[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:26.800967 systemd-logind[1421]: New session 12 of user core. Jul 7 06:06:26.809457 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 06:06:26.940201 sshd[5741]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:26.943485 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:38028.service: Deactivated successfully. Jul 7 06:06:26.945910 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 06:06:26.948529 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Jul 7 06:06:26.949478 systemd-logind[1421]: Removed session 12. Jul 7 06:06:31.950361 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:38038.service - OpenSSH per-connection server daemon (10.0.0.1:38038). Jul 7 06:06:31.988789 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 38038 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:31.990100 sshd[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:31.993541 systemd-logind[1421]: New session 13 of user core. Jul 7 06:06:32.002483 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 06:06:32.115909 sshd[5765]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:32.127110 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:38038.service: Deactivated successfully. Jul 7 06:06:32.129036 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 06:06:32.132035 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Jul 7 06:06:32.140597 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:38048.service - OpenSSH per-connection server daemon (10.0.0.1:38048). Jul 7 06:06:32.142452 systemd-logind[1421]: Removed session 13. Jul 7 06:06:32.175983 sshd[5779]: Accepted publickey for core from 10.0.0.1 port 38048 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:32.177174 sshd[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:32.180998 systemd-logind[1421]: New session 14 of user core. Jul 7 06:06:32.191461 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:06:32.386100 sshd[5779]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:32.397771 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:38048.service: Deactivated successfully. Jul 7 06:06:32.400260 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:06:32.401880 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:06:32.407569 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:38054.service - OpenSSH per-connection server daemon (10.0.0.1:38054). Jul 7 06:06:32.409246 systemd-logind[1421]: Removed session 14. Jul 7 06:06:32.445392 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 38054 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:32.446708 sshd[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:32.451207 systemd-logind[1421]: New session 15 of user core. Jul 7 06:06:32.461476 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 7 06:06:34.087550 sshd[5792]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:34.112434 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:34874.service - OpenSSH per-connection server daemon (10.0.0.1:34874). Jul 7 06:06:34.112950 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:38054.service: Deactivated successfully. Jul 7 06:06:34.114712 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:06:34.123493 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:06:34.131820 systemd-logind[1421]: Removed session 15. Jul 7 06:06:34.167951 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 34874 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:34.172043 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:34.178409 systemd-logind[1421]: New session 16 of user core. Jul 7 06:06:34.189519 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:06:34.627126 sshd[5830]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:34.636725 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:34874.service: Deactivated successfully. Jul 7 06:06:34.639940 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:06:34.643503 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:06:34.649660 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:34890.service - OpenSSH per-connection server daemon (10.0.0.1:34890). Jul 7 06:06:34.650596 systemd-logind[1421]: Removed session 16. Jul 7 06:06:34.685255 sshd[5850]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:34.686095 sshd[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:34.690395 systemd-logind[1421]: New session 17 of user core. Jul 7 06:06:34.697492 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:06:34.825583 sshd[5850]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:34.828077 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:34890.service: Deactivated successfully. Jul 7 06:06:34.830291 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:06:34.832299 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:06:34.833836 systemd-logind[1421]: Removed session 17. Jul 7 06:06:35.444875 containerd[1447]: time="2025-07-07T06:06:35.444441658Z" level=info msg="StopPodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\"" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.501 [WARNING][5873] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52", Pod:"coredns-7c65d6cfc9-4gfqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91441370f95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.501 [INFO][5873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.501 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" iface="eth0" netns="" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.501 [INFO][5873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.501 [INFO][5873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.535 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.536 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.536 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.546 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.546 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.548 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.552647 containerd[1447]: 2025-07-07 06:06:35.550 [INFO][5873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.553048 containerd[1447]: time="2025-07-07T06:06:35.552686586Z" level=info msg="TearDown network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" successfully" Jul 7 06:06:35.553048 containerd[1447]: time="2025-07-07T06:06:35.552714067Z" level=info msg="StopPodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" returns successfully" Jul 7 06:06:35.553567 containerd[1447]: time="2025-07-07T06:06:35.553348970Z" level=info msg="RemovePodSandbox for \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\"" Jul 7 06:06:35.563243 containerd[1447]: time="2025-07-07T06:06:35.562943162Z" level=info msg="Forcibly stopping sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\"" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.592 [WARNING][5901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cac05d19-02ca-4bd4-9a83-2d4df21aa5b9", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17f37cec86c64a5cc7224d4e4578ebabf7a11090f2cc100e8759ae5fbe3ace52", Pod:"coredns-7c65d6cfc9-4gfqc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali91441370f95", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.592 [INFO][5901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.592 [INFO][5901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" iface="eth0" netns="" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.592 [INFO][5901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.592 [INFO][5901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.609 [INFO][5910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.609 [INFO][5910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.609 [INFO][5910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.617 [WARNING][5910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.617 [INFO][5910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" HandleID="k8s-pod-network.3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Workload="localhost-k8s-coredns--7c65d6cfc9--4gfqc-eth0" Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.618 [INFO][5910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.621381 containerd[1447]: 2025-07-07 06:06:35.619 [INFO][5901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4" Jul 7 06:06:35.621763 containerd[1447]: time="2025-07-07T06:06:35.621417946Z" level=info msg="TearDown network for sandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" successfully" Jul 7 06:06:35.630118 containerd[1447]: time="2025-07-07T06:06:35.630072863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:35.630200 containerd[1447]: time="2025-07-07T06:06:35.630175107Z" level=info msg="RemovePodSandbox \"3fe77de978fa82271a537e2f1ea00c514a3b0e8456e181f49881ed89d7801fc4\" returns successfully" Jul 7 06:06:35.630849 containerd[1447]: time="2025-07-07T06:06:35.630824891Z" level=info msg="StopPodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\"" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.662 [WARNING][5927] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jghp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"632e2793-d8ae-43c1-a1dd-7d580aa97009", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c", Pod:"csi-node-driver-jghp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali772e026cb76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.662 [INFO][5927] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.662 [INFO][5927] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" iface="eth0" netns="" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.662 [INFO][5927] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.662 [INFO][5927] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.681 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.682 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.682 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.689 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.689 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.691 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.694050 containerd[1447]: 2025-07-07 06:06:35.692 [INFO][5927] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.694503 containerd[1447]: time="2025-07-07T06:06:35.694092970Z" level=info msg="TearDown network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" successfully" Jul 7 06:06:35.694503 containerd[1447]: time="2025-07-07T06:06:35.694118531Z" level=info msg="StopPodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" returns successfully" Jul 7 06:06:35.695130 containerd[1447]: time="2025-07-07T06:06:35.694779276Z" level=info msg="RemovePodSandbox for \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\"" Jul 7 06:06:35.695130 containerd[1447]: time="2025-07-07T06:06:35.694814957Z" level=info msg="Forcibly stopping sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\"" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.728 [WARNING][5952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jghp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"632e2793-d8ae-43c1-a1dd-7d580aa97009", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"decb05b2aaacf86849f6d6230b0e9984897560671a382dbb0242459f9e34885c", Pod:"csi-node-driver-jghp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali772e026cb76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.729 [INFO][5952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.729 [INFO][5952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" iface="eth0" netns="" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.729 [INFO][5952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.729 [INFO][5952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.747 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.748 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.748 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.756 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.756 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" HandleID="k8s-pod-network.c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Workload="localhost-k8s-csi--node--driver--jghp4-eth0" Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.757 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.760938 containerd[1447]: 2025-07-07 06:06:35.759 [INFO][5952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0" Jul 7 06:06:35.761350 containerd[1447]: time="2025-07-07T06:06:35.760977783Z" level=info msg="TearDown network for sandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" successfully" Jul 7 06:06:35.763835 containerd[1447]: time="2025-07-07T06:06:35.763796886Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:35.763886 containerd[1447]: time="2025-07-07T06:06:35.763858208Z" level=info msg="RemovePodSandbox \"c6f4cd5a38525001c9d62f061cd4fc61f469a9be592ab0e6f8f946bd1f32c4f0\" returns successfully" Jul 7 06:06:35.764300 containerd[1447]: time="2025-07-07T06:06:35.764263743Z" level=info msg="StopPodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\"" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.794 [WARNING][5978] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"532a5639-1429-4a85-8fbb-b79c8b04dfd3", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b", Pod:"calico-apiserver-56bdbd79df-xdcfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05d0cc9473d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.794 [INFO][5978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.794 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" iface="eth0" netns="" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.794 [INFO][5978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.795 [INFO][5978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.813 [INFO][5987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.813 [INFO][5987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.813 [INFO][5987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.822 [WARNING][5987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.822 [INFO][5987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.824 [INFO][5987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.827581 containerd[1447]: 2025-07-07 06:06:35.826 [INFO][5978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.827581 containerd[1447]: time="2025-07-07T06:06:35.827472060Z" level=info msg="TearDown network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" successfully" Jul 7 06:06:35.827581 containerd[1447]: time="2025-07-07T06:06:35.827494021Z" level=info msg="StopPodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" returns successfully" Jul 7 06:06:35.828905 containerd[1447]: time="2025-07-07T06:06:35.827785192Z" level=info msg="RemovePodSandbox for \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\"" Jul 7 06:06:35.828905 containerd[1447]: time="2025-07-07T06:06:35.827812033Z" level=info msg="Forcibly stopping sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\"" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.859 [WARNING][6005] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"532a5639-1429-4a85-8fbb-b79c8b04dfd3", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f851f5d8148b2337a05c8bff7ba0ea9c3bf51faec4c758774dd9b494d2d8815b", Pod:"calico-apiserver-56bdbd79df-xdcfc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05d0cc9473d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.860 [INFO][6005] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.860 [INFO][6005] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" iface="eth0" netns="" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.860 [INFO][6005] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.860 [INFO][6005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.877 [INFO][6014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.877 [INFO][6014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.877 [INFO][6014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.885 [WARNING][6014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.885 [INFO][6014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" HandleID="k8s-pod-network.c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Workload="localhost-k8s-calico--apiserver--56bdbd79df--xdcfc-eth0" Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.886 [INFO][6014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.897070 containerd[1447]: 2025-07-07 06:06:35.888 [INFO][6005] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886" Jul 7 06:06:35.897511 containerd[1447]: time="2025-07-07T06:06:35.897114254Z" level=info msg="TearDown network for sandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" successfully" Jul 7 06:06:35.900261 containerd[1447]: time="2025-07-07T06:06:35.900213607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:35.904649 containerd[1447]: time="2025-07-07T06:06:35.904605648Z" level=info msg="RemovePodSandbox \"c3237ec8485aef64a3e9a6d090db605f78c546c4bd9750a894714c0d95658886\" returns successfully" Jul 7 06:06:35.905188 containerd[1447]: time="2025-07-07T06:06:35.905161709Z" level=info msg="StopPodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\"" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.936 [WARNING][6031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0", GenerateName:"calico-kube-controllers-7fd865784c-", Namespace:"calico-system", SelfLink:"", UID:"18c7c67e-01e0-40ea-8d99-ba460eb1fde4", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd865784c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0", Pod:"calico-kube-controllers-7fd865784c-rm28g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d41e44cdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.936 [INFO][6031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.936 [INFO][6031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" iface="eth0" netns="" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.936 [INFO][6031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.936 [INFO][6031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.953 [INFO][6040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.954 [INFO][6040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.954 [INFO][6040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.962 [WARNING][6040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.963 [INFO][6040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.965 [INFO][6040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:35.968553 containerd[1447]: 2025-07-07 06:06:35.966 [INFO][6031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:35.968553 containerd[1447]: time="2025-07-07T06:06:35.968509271Z" level=info msg="TearDown network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" successfully" Jul 7 06:06:35.968553 containerd[1447]: time="2025-07-07T06:06:35.968534032Z" level=info msg="StopPodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" returns successfully" Jul 7 06:06:35.969128 containerd[1447]: time="2025-07-07T06:06:35.968960768Z" level=info msg="RemovePodSandbox for \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\"" Jul 7 06:06:35.969128 containerd[1447]: time="2025-07-07T06:06:35.968993049Z" level=info msg="Forcibly stopping sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\"" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.001 [WARNING][6058] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0", GenerateName:"calico-kube-controllers-7fd865784c-", Namespace:"calico-system", SelfLink:"", UID:"18c7c67e-01e0-40ea-8d99-ba460eb1fde4", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fd865784c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ce1bb968a4656baf7cb11ba761f8e71d36fae6a422a2de3bf98e0b6d8a4cf8e0", Pod:"calico-kube-controllers-7fd865784c-rm28g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7d41e44cdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.001 [INFO][6058] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.001 [INFO][6058] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" iface="eth0" netns="" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.001 [INFO][6058] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.001 [INFO][6058] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.018 [INFO][6067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.018 [INFO][6067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.018 [INFO][6067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.026 [WARNING][6067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.026 [INFO][6067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" HandleID="k8s-pod-network.5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Workload="localhost-k8s-calico--kube--controllers--7fd865784c--rm28g-eth0" Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.028 [INFO][6067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.031696 containerd[1447]: 2025-07-07 06:06:36.030 [INFO][6058] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40" Jul 7 06:06:36.032081 containerd[1447]: time="2025-07-07T06:06:36.031738978Z" level=info msg="TearDown network for sandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" successfully" Jul 7 06:06:36.044265 containerd[1447]: time="2025-07-07T06:06:36.044220911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:36.044340 containerd[1447]: time="2025-07-07T06:06:36.044294914Z" level=info msg="RemovePodSandbox \"5735aa1cdedbd9fd9a1c5440fd9f7322f6ed7c0cf0bed3155706f06f7e741c40\" returns successfully" Jul 7 06:06:36.044774 containerd[1447]: time="2025-07-07T06:06:36.044748650Z" level=info msg="StopPodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\"" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.076 [WARNING][6085] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"528c9db9-c113-4e11-bd08-112e227a85e1", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9", Pod:"calico-apiserver-56bdbd79df-bkhbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09a41d31e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.076 [INFO][6085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.077 [INFO][6085] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" iface="eth0" netns="" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.077 [INFO][6085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.077 [INFO][6085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.094 [INFO][6094] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.094 [INFO][6094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.094 [INFO][6094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.102 [WARNING][6094] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.102 [INFO][6094] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.103 [INFO][6094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.107379 containerd[1447]: 2025-07-07 06:06:36.105 [INFO][6085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.107757 containerd[1447]: time="2025-07-07T06:06:36.107392924Z" level=info msg="TearDown network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" successfully" Jul 7 06:06:36.107757 containerd[1447]: time="2025-07-07T06:06:36.107418925Z" level=info msg="StopPodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" returns successfully" Jul 7 06:06:36.108151 containerd[1447]: time="2025-07-07T06:06:36.108120990Z" level=info msg="RemovePodSandbox for \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\"" Jul 7 06:06:36.108151 containerd[1447]: time="2025-07-07T06:06:36.108149711Z" level=info msg="Forcibly stopping sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\"" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.139 [WARNING][6112] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0", GenerateName:"calico-apiserver-56bdbd79df-", Namespace:"calico-apiserver", SelfLink:"", UID:"528c9db9-c113-4e11-bd08-112e227a85e1", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56bdbd79df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0157ce4012e3f8f72a772423f235c54a051c65f14284462d27bccfae553bfad9", Pod:"calico-apiserver-56bdbd79df-bkhbd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09a41d31e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.139 [INFO][6112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.139 [INFO][6112] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" iface="eth0" netns="" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.139 [INFO][6112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.139 [INFO][6112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.156 [INFO][6121] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.156 [INFO][6121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.156 [INFO][6121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.165 [WARNING][6121] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.165 [INFO][6121] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" HandleID="k8s-pod-network.431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Workload="localhost-k8s-calico--apiserver--56bdbd79df--bkhbd-eth0" Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.166 [INFO][6121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.169920 containerd[1447]: 2025-07-07 06:06:36.168 [INFO][6112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37" Jul 7 06:06:36.170297 containerd[1447]: time="2025-07-07T06:06:36.169956115Z" level=info msg="TearDown network for sandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" successfully" Jul 7 06:06:36.172733 containerd[1447]: time="2025-07-07T06:06:36.172692974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:36.172778 containerd[1447]: time="2025-07-07T06:06:36.172754016Z" level=info msg="RemovePodSandbox \"431c1eb7652c9df90c87e6231849bc58a2d5e348f9afc8346c80a94e7e394b37\" returns successfully" Jul 7 06:06:36.173177 containerd[1447]: time="2025-07-07T06:06:36.173142591Z" level=info msg="StopPodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\"" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.205 [WARNING][6141] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" WorkloadEndpoint="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.205 [INFO][6141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.205 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" iface="eth0" netns="" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.205 [INFO][6141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.205 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.225 [INFO][6149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.225 [INFO][6149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.225 [INFO][6149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.232 [WARNING][6149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.232 [INFO][6149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.234 [INFO][6149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.237885 containerd[1447]: 2025-07-07 06:06:36.236 [INFO][6141] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.237885 containerd[1447]: time="2025-07-07T06:06:36.237846299Z" level=info msg="TearDown network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" successfully" Jul 7 06:06:36.237885 containerd[1447]: time="2025-07-07T06:06:36.237870620Z" level=info msg="StopPodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" returns successfully" Jul 7 06:06:36.238921 containerd[1447]: time="2025-07-07T06:06:36.238363798Z" level=info msg="RemovePodSandbox for \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\"" Jul 7 06:06:36.238921 containerd[1447]: time="2025-07-07T06:06:36.238397159Z" level=info msg="Forcibly stopping sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\"" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.270 [WARNING][6166] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" WorkloadEndpoint="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.270 [INFO][6166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.270 [INFO][6166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" iface="eth0" netns="" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.270 [INFO][6166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.270 [INFO][6166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.287 [INFO][6175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.287 [INFO][6175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.287 [INFO][6175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.296 [WARNING][6175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.296 [INFO][6175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" HandleID="k8s-pod-network.71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Workload="localhost-k8s-whisker--656c4558cd--j6wss-eth0" Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.298 [INFO][6175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.301206 containerd[1447]: 2025-07-07 06:06:36.299 [INFO][6166] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163" Jul 7 06:06:36.301555 containerd[1447]: time="2025-07-07T06:06:36.301277201Z" level=info msg="TearDown network for sandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" successfully" Jul 7 06:06:36.304229 containerd[1447]: time="2025-07-07T06:06:36.304167826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:36.304277 containerd[1447]: time="2025-07-07T06:06:36.304265310Z" level=info msg="RemovePodSandbox \"71e5c2239c04e208fdd03bb7effb2ad8a142a508ed8d870766444cc28ec0d163\" returns successfully" Jul 7 06:06:36.304792 containerd[1447]: time="2025-07-07T06:06:36.304756448Z" level=info msg="StopPodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\"" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.336 [WARNING][6193] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ef96388-886e-494c-b484-12fab2731020", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7", Pod:"coredns-7c65d6cfc9-lczkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidac1869deb1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.337 [INFO][6193] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.337 [INFO][6193] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" iface="eth0" netns="" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.337 [INFO][6193] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.337 [INFO][6193] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.354 [INFO][6202] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.354 [INFO][6202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.354 [INFO][6202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.362 [WARNING][6202] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.362 [INFO][6202] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.364 [INFO][6202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.372621 containerd[1447]: 2025-07-07 06:06:36.370 [INFO][6193] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.373113 containerd[1447]: time="2025-07-07T06:06:36.372664713Z" level=info msg="TearDown network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" successfully" Jul 7 06:06:36.373113 containerd[1447]: time="2025-07-07T06:06:36.372688354Z" level=info msg="StopPodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" returns successfully" Jul 7 06:06:36.373910 containerd[1447]: time="2025-07-07T06:06:36.373581146Z" level=info msg="RemovePodSandbox for \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\"" Jul 7 06:06:36.373910 containerd[1447]: time="2025-07-07T06:06:36.373615667Z" level=info msg="Forcibly stopping sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\"" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.402 [WARNING][6219] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8ef96388-886e-494c-b484-12fab2731020", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e358aacf12b5c9685bf2d531de582ce5dc51c9f1676fb17cc4e2b26e31a634e7", Pod:"coredns-7c65d6cfc9-lczkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidac1869deb1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.403 [INFO][6219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.403 [INFO][6219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" iface="eth0" netns="" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.403 [INFO][6219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.403 [INFO][6219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.420 [INFO][6227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.421 [INFO][6227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.421 [INFO][6227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.428 [WARNING][6227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.428 [INFO][6227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" HandleID="k8s-pod-network.d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Workload="localhost-k8s-coredns--7c65d6cfc9--lczkp-eth0" Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.430 [INFO][6227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.433355 containerd[1447]: 2025-07-07 06:06:36.431 [INFO][6219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb" Jul 7 06:06:36.435053 containerd[1447]: time="2025-07-07T06:06:36.433796812Z" level=info msg="TearDown network for sandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" successfully" Jul 7 06:06:36.436487 containerd[1447]: time="2025-07-07T06:06:36.436456988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:36.436608 containerd[1447]: time="2025-07-07T06:06:36.436590513Z" level=info msg="RemovePodSandbox \"d7b7dbedcdbac166eeee7ef95256b4195fa8d6e006100fb812fac9b3e2d718bb\" returns successfully" Jul 7 06:06:36.437124 containerd[1447]: time="2025-07-07T06:06:36.437103292Z" level=info msg="StopPodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\"" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.468 [WARNING][6245] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67", Pod:"goldmane-58fd7646b9-6bhqv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56917c25625", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.469 [INFO][6245] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.469 [INFO][6245] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" iface="eth0" netns="" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.469 [INFO][6245] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.469 [INFO][6245] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.486 [INFO][6254] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.486 [INFO][6254] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.486 [INFO][6254] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.494 [WARNING][6254] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.495 [INFO][6254] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.496 [INFO][6254] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.499781 containerd[1447]: 2025-07-07 06:06:36.497 [INFO][6245] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.500411 containerd[1447]: time="2025-07-07T06:06:36.499817728Z" level=info msg="TearDown network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" successfully" Jul 7 06:06:36.500411 containerd[1447]: time="2025-07-07T06:06:36.499842889Z" level=info msg="StopPodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" returns successfully" Jul 7 06:06:36.500411 containerd[1447]: time="2025-07-07T06:06:36.500291465Z" level=info msg="RemovePodSandbox for \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\"" Jul 7 06:06:36.500411 containerd[1447]: time="2025-07-07T06:06:36.500347467Z" level=info msg="Forcibly stopping sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\"" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.530 [WARNING][6272] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"66fd4b0e-a42b-41a5-a8e7-b55cecdc3007", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 5, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07a2fd3ab1b62d5a4be8c25f553a5c97f840cdbf074b810e8b2cded761303a67", Pod:"goldmane-58fd7646b9-6bhqv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56917c25625", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.530 [INFO][6272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.530 [INFO][6272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" iface="eth0" netns="" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.530 [INFO][6272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.530 [INFO][6272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.549 [INFO][6280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.549 [INFO][6280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.549 [INFO][6280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.558 [WARNING][6280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.558 [INFO][6280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" HandleID="k8s-pod-network.f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Workload="localhost-k8s-goldmane--58fd7646b9--6bhqv-eth0" Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.560 [INFO][6280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 06:06:36.565002 containerd[1447]: 2025-07-07 06:06:36.562 [INFO][6272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc" Jul 7 06:06:36.565002 containerd[1447]: time="2025-07-07T06:06:36.563753089Z" level=info msg="TearDown network for sandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" successfully" Jul 7 06:06:36.566861 containerd[1447]: time="2025-07-07T06:06:36.566833921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 06:06:36.566982 containerd[1447]: time="2025-07-07T06:06:36.566965165Z" level=info msg="RemovePodSandbox \"f640c532412165321f96231f7b26b66c70c02b0b1c79bfd18b772c47a583bacc\" returns successfully" Jul 7 06:06:39.837100 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:34892.service - OpenSSH per-connection server daemon (10.0.0.1:34892). Jul 7 06:06:39.876269 sshd[6300]: Accepted publickey for core from 10.0.0.1 port 34892 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:39.877128 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:39.881009 systemd-logind[1421]: New session 18 of user core. Jul 7 06:06:39.891450 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 06:06:40.003197 sshd[6300]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:40.008010 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:34892.service: Deactivated successfully. Jul 7 06:06:40.009895 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 06:06:40.012086 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Jul 7 06:06:40.013152 systemd-logind[1421]: Removed session 18. Jul 7 06:06:45.014345 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:55194.service - OpenSSH per-connection server daemon (10.0.0.1:55194). Jul 7 06:06:45.053237 sshd[6339]: Accepted publickey for core from 10.0.0.1 port 55194 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:45.054610 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:45.059649 systemd-logind[1421]: New session 19 of user core. Jul 7 06:06:45.068470 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 06:06:45.186736 sshd[6339]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:45.189706 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:55194.service: Deactivated successfully. Jul 7 06:06:45.191466 systemd[1]: session-19.scope: Deactivated successfully. 
Jul 7 06:06:45.192080 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Jul 7 06:06:45.193126 systemd-logind[1421]: Removed session 19. Jul 7 06:06:50.198136 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:55210.service - OpenSSH per-connection server daemon (10.0.0.1:55210). Jul 7 06:06:50.242629 sshd[6374]: Accepted publickey for core from 10.0.0.1 port 55210 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 06:06:50.243814 sshd[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:06:50.247537 systemd-logind[1421]: New session 20 of user core. Jul 7 06:06:50.257519 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 06:06:50.373870 sshd[6374]: pam_unix(sshd:session): session closed for user core Jul 7 06:06:50.377693 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:55210.service: Deactivated successfully. Jul 7 06:06:50.379620 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 06:06:50.381022 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Jul 7 06:06:50.382132 systemd-logind[1421]: Removed session 20.
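The sshd records that close this section follow the systemd per-connection pattern: each accepted connection gets its own sshd@N-... service unit, pam_unix opens the session, systemd-logind registers a numbered session-N.scope, and teardown deactivates both. A small stand-alone helper like the one below (an assumption, not part of systemd or OpenSSH) can pair the "New session"/"Removed session" records from a dump like this one and report how long each session lasted. The regexes are written against this log's exact timestamp format; several journal records share one physical line in this dump, so matches are scanned across each line rather than anchored at line start.

// session_pairing_sketch.go
//
// Stand-alone helper (hypothetical, stdlib only) that pairs logind
// "New session"/"Removed session" records read on stdin and prints
// each session's duration.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// Matches e.g. "Jul 7 06:06:39.881009 systemd-logind[1421]: New session 18 of user core."
	newRe     = regexp.MustCompile(`(\w+ +\d+ \d+:\d+:\d+)\.\d+ systemd-logind\[\d+\]: New session (\d+) of user`)
	removedRe = regexp.MustCompile(`(\w+ +\d+ \d+:\d+:\d+)\.\d+ systemd-logind\[\d+\]: Removed session (\d+)\.`)
)

func main() {
	opened := map[string]time.Time{} // session ID -> opening timestamp
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // records in this dump can be very long
	for sc.Scan() {
		line := sc.Text()
		// Several records may share one physical line, so collect every match.
		for _, m := range newRe.FindAllStringSubmatch(line, -1) {
			if t, err := time.Parse("Jan 2 15:04:05", m[1]); err == nil {
				opened[m[2]] = t
			}
		}
		for _, m := range removedRe.FindAllStringSubmatch(line, -1) {
			t, err := time.Parse("Jan 2 15:04:05", m[1])
			if err != nil {
				continue
			}
			if start, ok := opened[m[2]]; ok {
				fmt.Printf("session %s: %s -> %s (%s)\n",
					m[2], start.Format("15:04:05"), t.Format("15:04:05"), t.Sub(start))
				delete(opened, m[2])
			}
		}
	}
}

Fed this section, it would pair sessions 18 through 20 above. The fractional seconds are dropped by the parse layout, so durations come out at one-second granularity, and the timestamps carry no year, so only durations within a single dump are meaningful.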