Jul 14 21:52:18.978932 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:52:18.978954 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jul 14 20:26:44 -00 2025
Jul 14 21:52:18.978964 kernel: KASLR enabled
Jul 14 21:52:18.978970 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:52:18.978975 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 14 21:52:18.978981 kernel: random: crng init done
Jul 14 21:52:18.978988 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:52:18.978994 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 14 21:52:18.979000 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:52:18.979008 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979038 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979044 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979053 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979061 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979069 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979080 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979088 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979095 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:52:18.979102 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:52:18.979108 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:52:18.979115 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:52:18.979121 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 14 21:52:18.979128 kernel: Zone ranges:
Jul 14 21:52:18.979134 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:52:18.979140 kernel: DMA32 empty
Jul 14 21:52:18.979148 kernel: Normal empty
Jul 14 21:52:18.979155 kernel: Movable zone start for each node
Jul 14 21:52:18.979161 kernel: Early memory node ranges
Jul 14 21:52:18.979168 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 14 21:52:18.979174 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 14 21:52:18.979181 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 14 21:52:18.979187 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 21:52:18.979193 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 21:52:18.979200 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 21:52:18.979206 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 21:52:18.979213 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:52:18.979219 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:52:18.979233 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:52:18.979240 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:52:18.979247 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:52:18.979256 kernel: psci: Trusted OS migration not required
Jul 14 21:52:18.979263 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:52:18.979270 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:52:18.979278 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 14 21:52:18.979285 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 14 21:52:18.979292 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:52:18.979299 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:52:18.979306 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:52:18.979312 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:52:18.979319 kernel: CPU features: detected: Spectre-v4
Jul 14 21:52:18.979326 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:52:18.979333 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:52:18.979340 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:52:18.979348 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:52:18.979355 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:52:18.979361 kernel: alternatives: applying boot alternatives
Jul 14 21:52:18.979369 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:52:18.979376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:52:18.979383 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:52:18.979390 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:52:18.979397 kernel: Fallback order for Node 0: 0
Jul 14 21:52:18.979404 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:52:18.979411 kernel: Policy zone: DMA
Jul 14 21:52:18.979417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:52:18.979425 kernel: software IO TLB: area num 4.
Jul 14 21:52:18.979432 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 14 21:52:18.979439 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 14 21:52:18.979447 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:52:18.979453 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:52:18.979461 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:52:18.979468 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:52:18.979475 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:52:18.979482 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:52:18.979488 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:52:18.979495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:52:18.979502 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:52:18.979510 kernel: GICv3: 256 SPIs implemented
Jul 14 21:52:18.979517 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:52:18.979524 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:52:18.979530 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 21:52:18.979537 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:52:18.979544 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:52:18.979551 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:52:18.979558 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:52:18.979565 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 14 21:52:18.979572 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 14 21:52:18.979579 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:52:18.979586 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:52:18.979593 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:52:18.979600 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:52:18.979607 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:52:18.979614 kernel: arm-pv: using stolen time PV
Jul 14 21:52:18.979621 kernel: Console: colour dummy device 80x25
Jul 14 21:52:18.979628 kernel: ACPI: Core revision 20230628
Jul 14 21:52:18.979635 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:52:18.979642 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:52:18.979649 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 21:52:18.979657 kernel: landlock: Up and running.
Jul 14 21:52:18.979664 kernel: SELinux: Initializing.
Jul 14 21:52:18.979671 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:52:18.979679 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:52:18.979686 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:52:18.979693 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:52:18.979700 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:52:18.979707 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:52:18.979714 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:52:18.979722 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:52:18.979729 kernel: Remapping and enabling EFI services.
Jul 14 21:52:18.979736 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:52:18.979743 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:52:18.979750 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:52:18.979757 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 14 21:52:18.979764 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:52:18.979771 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:52:18.979778 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:52:18.979785 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:52:18.979794 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 14 21:52:18.979801 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:52:18.979812 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:52:18.979821 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:52:18.979829 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:52:18.979836 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 14 21:52:18.979843 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:52:18.979850 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:52:18.979858 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:52:18.979867 kernel: SMP: Total of 4 processors activated.
Jul 14 21:52:18.979877 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:52:18.979885 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:52:18.979896 kernel: CPU features: detected: Common not Private translations
Jul 14 21:52:18.979905 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:52:18.979915 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 21:52:18.979922 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:52:18.979929 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:52:18.979938 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:52:18.979946 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:52:18.979953 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:52:18.979961 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:52:18.979968 kernel: alternatives: applying system-wide alternatives
Jul 14 21:52:18.979975 kernel: devtmpfs: initialized
Jul 14 21:52:18.979983 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:52:18.979990 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:52:18.979998 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:52:18.980006 kernel: SMBIOS 3.0.0 present.
Jul 14 21:52:18.980020 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 14 21:52:18.980028 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:52:18.980035 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:52:18.980043 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:52:18.980050 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:52:18.980058 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:52:18.980066 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 14 21:52:18.980073 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:52:18.980083 kernel: cpuidle: using governor menu
Jul 14 21:52:18.980090 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:52:18.980098 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:52:18.980105 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:52:18.980112 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:52:18.980120 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 21:52:18.980127 kernel: Modules: 0 pages in range for non-PLT usage
Jul 14 21:52:18.980135 kernel: Modules: 509008 pages in range for PLT usage
Jul 14 21:52:18.980142 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:52:18.980151 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:52:18.980159 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:52:18.980167 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 21:52:18.980174 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:52:18.980182 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:52:18.980189 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:52:18.980196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 21:52:18.980204 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:52:18.980211 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:52:18.980220 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:52:18.980233 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:52:18.980240 kernel: ACPI: Interpreter enabled
Jul 14 21:52:18.980248 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:52:18.980255 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:52:18.980263 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:52:18.980271 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:52:18.980278 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:52:18.980415 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:52:18.980495 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:52:18.980562 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:52:18.980629 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:52:18.980695 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:52:18.980705 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:52:18.980712 kernel: PCI host bridge to bus 0000:00
Jul 14 21:52:18.980785 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:52:18.980849 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:52:18.980909 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:52:18.980968 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:52:18.981067 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:52:18.981152 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:52:18.981222 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:52:18.981306 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:52:18.981383 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:52:18.981452 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:52:18.981520 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:52:18.981588 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:52:18.981652 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:52:18.981711 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:52:18.981774 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:52:18.981784 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:52:18.981792 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:52:18.981800 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:52:18.981807 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:52:18.981815 kernel: iommu: Default domain type: Translated
Jul 14 21:52:18.981822 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:52:18.981829 kernel: efivars: Registered efivars operations
Jul 14 21:52:18.981839 kernel: vgaarb: loaded
Jul 14 21:52:18.981846 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:52:18.981853 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:52:18.981861 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:52:18.981868 kernel: pnp: PnP ACPI init
Jul 14 21:52:18.981939 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:52:18.981950 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:52:18.981958 kernel: NET: Registered PF_INET protocol family
Jul 14 21:52:18.981965 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:52:18.981975 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:52:18.981982 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:52:18.981990 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:52:18.981997 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:52:18.982005 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:52:18.982027 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:52:18.982036 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:52:18.982043 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:52:18.982052 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:52:18.982060 kernel: kvm [1]: HYP mode not available
Jul 14 21:52:18.982067 kernel: Initialise system trusted keyrings
Jul 14 21:52:18.982075 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:52:18.982082 kernel: Key type asymmetric registered
Jul 14 21:52:18.982089 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:52:18.982096 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 14 21:52:18.982104 kernel: io scheduler mq-deadline registered
Jul 14 21:52:18.982111 kernel: io scheduler kyber registered
Jul 14 21:52:18.982118 kernel: io scheduler bfq registered
Jul 14 21:52:18.982127 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:52:18.982135 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:52:18.982143 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:52:18.982216 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:52:18.982233 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:52:18.982241 kernel: thunder_xcv, ver 1.0
Jul 14 21:52:18.982248 kernel: thunder_bgx, ver 1.0
Jul 14 21:52:18.982256 kernel: nicpf, ver 1.0
Jul 14 21:52:18.982263 kernel: nicvf, ver 1.0
Jul 14 21:52:18.982349 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:52:18.982414 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:52:18 UTC (1752529938)
Jul 14 21:52:18.982424 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:52:18.982431 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:52:18.982439 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 14 21:52:18.982446 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 21:52:18.982454 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:52:18.982461 kernel: Segment Routing with IPv6
Jul 14 21:52:18.982471 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:52:18.982479 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:52:18.982486 kernel: Key type dns_resolver registered
Jul 14 21:52:18.982494 kernel: registered taskstats version 1
Jul 14 21:52:18.982501 kernel: Loading compiled-in X.509 certificates
Jul 14 21:52:18.982508 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 0878f879bf0f15203fd920e9f7d6346db298c301'
Jul 14 21:52:18.982516 kernel: Key type .fscrypt registered
Jul 14 21:52:18.982523 kernel: Key type fscrypt-provisioning registered
Jul 14 21:52:18.982531 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:52:18.982540 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:52:18.982547 kernel: ima: No architecture policies found
Jul 14 21:52:18.982554 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:52:18.982562 kernel: clk: Disabling unused clocks
Jul 14 21:52:18.982569 kernel: Freeing unused kernel memory: 39424K
Jul 14 21:52:18.982577 kernel: Run /init as init process
Jul 14 21:52:18.982584 kernel: with arguments:
Jul 14 21:52:18.982591 kernel: /init
Jul 14 21:52:18.982598 kernel: with environment:
Jul 14 21:52:18.982607 kernel: HOME=/
Jul 14 21:52:18.982614 kernel: TERM=linux
Jul 14 21:52:18.982621 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:52:18.982630 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 21:52:18.982640 systemd[1]: Detected virtualization kvm.
Jul 14 21:52:18.982648 systemd[1]: Detected architecture arm64.
Jul 14 21:52:18.982656 systemd[1]: Running in initrd.
Jul 14 21:52:18.982665 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:52:18.982672 systemd[1]: Hostname set to .
Jul 14 21:52:18.982680 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:52:18.982688 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:52:18.982696 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:52:18.982704 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:52:18.982712 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:52:18.982721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:52:18.982730 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:52:18.982738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:52:18.982748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:52:18.982756 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:52:18.982764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:52:18.982772 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:52:18.982780 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:52:18.982789 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:52:18.982797 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:52:18.982805 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:52:18.982813 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:52:18.982821 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:52:18.982829 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:52:18.982837 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 21:52:18.982845 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:52:18.982853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:52:18.982862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:52:18.982871 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:52:18.982878 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:52:18.982886 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:52:18.982894 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:52:18.982902 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:52:18.982910 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:52:18.982918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:52:18.982928 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:52:18.982935 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:52:18.982943 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:52:18.982951 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:52:18.982960 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:52:18.982985 systemd-journald[238]: Collecting audit messages is disabled.
Jul 14 21:52:18.983005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:52:18.983091 systemd-journald[238]: Journal started
Jul 14 21:52:18.983115 systemd-journald[238]: Runtime Journal (/run/log/journal/feeccba996ab4fc791c8284c03836314) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:52:18.978063 systemd-modules-load[239]: Inserted module 'overlay'
Jul 14 21:52:18.986352 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:52:18.988934 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:52:18.989403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:18.993575 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:52:18.993595 kernel: Bridge firewalling registered
Jul 14 21:52:18.993511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:52:18.993861 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 14 21:52:18.995393 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:52:19.000201 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:52:19.001960 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:52:19.004559 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:52:19.012543 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:52:19.016232 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:52:19.025165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:52:19.026423 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:52:19.029222 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:52:19.042046 dracut-cmdline[283]: dracut-dracut-053
Jul 14 21:52:19.044528 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:52:19.056850 systemd-resolved[278]: Positive Trust Anchors:
Jul 14 21:52:19.056866 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:52:19.056899 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:52:19.061836 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 14 21:52:19.062836 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:52:19.066758 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:52:19.114040 kernel: SCSI subsystem initialized
Jul 14 21:52:19.119029 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:52:19.127035 kernel: iscsi: registered transport (tcp)
Jul 14 21:52:19.140237 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:52:19.140295 kernel: QLogic iSCSI HBA Driver
Jul 14 21:52:19.186573 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:52:19.200211 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:52:19.218529 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:52:19.218590 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:52:19.219626 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 21:52:19.266045 kernel: raid6: neonx8 gen() 15766 MB/s
Jul 14 21:52:19.283281 kernel: raid6: neonx4 gen() 15647 MB/s
Jul 14 21:52:19.300053 kernel: raid6: neonx2 gen() 12215 MB/s
Jul 14 21:52:19.317036 kernel: raid6: neonx1 gen() 10467 MB/s
Jul 14 21:52:19.334037 kernel: raid6: int64x8 gen() 6823 MB/s
Jul 14 21:52:19.351040 kernel: raid6: int64x4 gen() 7294 MB/s
Jul 14 21:52:19.368040 kernel: raid6: int64x2 gen() 6118 MB/s
Jul 14 21:52:19.385229 kernel: raid6: int64x1 gen() 5049 MB/s
Jul 14 21:52:19.385259 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Jul 14 21:52:19.403159 kernel: raid6: .... xor() 11877 MB/s, rmw enabled
Jul 14 21:52:19.403174 kernel: raid6: using neon recovery algorithm
Jul 14 21:52:19.408038 kernel: xor: measuring software checksum speed
Jul 14 21:52:19.409365 kernel: 8regs : 17329 MB/sec
Jul 14 21:52:19.409380 kernel: 32regs : 19655 MB/sec
Jul 14 21:52:19.410634 kernel: arm64_neon : 26998 MB/sec
Jul 14 21:52:19.410648 kernel: xor: using function: arm64_neon (26998 MB/sec)
Jul 14 21:52:19.461053 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:52:19.473080 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:52:19.486219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:52:19.498852 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jul 14 21:52:19.502148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:52:19.505176 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:52:19.521082 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Jul 14 21:52:19.547500 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:52:19.556191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:52:19.600234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:52:19.606163 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:52:19.619203 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:52:19.620652 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:52:19.622535 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:52:19.626125 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:52:19.634180 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:52:19.646822 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:52:19.647010 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:52:19.650688 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:52:19.653091 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:52:19.653113 kernel: GPT:9289727 != 19775487
Jul 14 21:52:19.654028 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:52:19.654073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:52:19.654207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:52:19.661170 kernel: GPT:9289727 != 19775487
Jul 14 21:52:19.661191 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:52:19.661202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:52:19.661094 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:52:19.662254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:52:19.662396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:19.664647 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:52:19.672397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:52:19.681057 kernel: BTRFS: device fsid a239cc51-2249-4f1a-8861-421a0d84a369 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (508)
Jul 14 21:52:19.682475 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:52:19.686785 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (521)
Jul 14 21:52:19.688376 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:52:19.696465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:52:19.703662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:52:19.707650 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:52:19.708907 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:52:19.721175 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:52:19.723092 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:52:19.727909 disk-uuid[552]: Primary Header is updated.
Jul 14 21:52:19.727909 disk-uuid[552]: Secondary Entries is updated.
Jul 14 21:52:19.727909 disk-uuid[552]: Secondary Header is updated.
Jul 14 21:52:19.731325 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:52:19.756258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:52:20.751043 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:52:20.751959 disk-uuid[553]: The operation has completed successfully.
Jul 14 21:52:20.775275 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:52:20.775373 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:52:20.795179 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:52:20.797991 sh[576]: Success
Jul 14 21:52:20.813056 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:52:20.840345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:52:20.850475 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:52:20.852897 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:52:20.863600 kernel: BTRFS info (device dm-0): first mount of filesystem a239cc51-2249-4f1a-8861-421a0d84a369
Jul 14 21:52:20.863640 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:20.864756 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:52:20.864773 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:52:20.866174 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:52:20.869543 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:52:20.870885 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:52:20.880177 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:52:20.881910 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:52:20.889799 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:20.889842 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:20.889853 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:52:20.893208 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:52:20.901831 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 21:52:20.904024 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:20.909844 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:52:20.917235 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:52:20.982139 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:52:20.996198 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:52:21.015345 ignition[674]: Ignition 2.19.0
Jul 14 21:52:21.015355 ignition[674]: Stage: fetch-offline
Jul 14 21:52:21.015391 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:21.015400 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:21.015606 ignition[674]: parsed url from cmdline: ""
Jul 14 21:52:21.015609 ignition[674]: no config URL provided
Jul 14 21:52:21.015614 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:52:21.015621 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:52:21.015647 ignition[674]: op(1): [started] loading QEMU firmware config module
Jul 14 21:52:21.015652 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:52:21.023215 ignition[674]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:52:21.025569 systemd-networkd[769]: lo: Link UP
Jul 14 21:52:21.025582 systemd-networkd[769]: lo: Gained carrier
Jul 14 21:52:21.026536 systemd-networkd[769]: Enumeration completed
Jul 14 21:52:21.026635 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:52:21.027094 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:21.027097 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:52:21.028205 systemd[1]: Reached target network.target - Network.
Jul 14 21:52:21.028339 systemd-networkd[769]: eth0: Link UP
Jul 14 21:52:21.028343 systemd-networkd[769]: eth0: Gained carrier
Jul 14 21:52:21.028350 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:52:21.046060 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:52:21.073049 ignition[674]: parsing config with SHA512: 30b0a69eee65b8f51d6e943e31c6705e8086f8895152bde86a59482da98829efdb3308b7fe8f02a2de27c66bcc3ada2afed7659867de73e8dd2008c422f9e95d
Jul 14 21:52:21.077421 unknown[674]: fetched base config from "system"
Jul 14 21:52:21.077430 unknown[674]: fetched user config from "qemu"
Jul 14 21:52:21.077853 ignition[674]: fetch-offline: fetch-offline passed
Jul 14 21:52:21.079266 ignition[674]: Ignition finished successfully
Jul 14 21:52:21.081801 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:52:21.083436 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:52:21.093178 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:52:21.103967 ignition[776]: Ignition 2.19.0
Jul 14 21:52:21.103977 ignition[776]: Stage: kargs
Jul 14 21:52:21.104148 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:21.104157 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:21.107820 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:52:21.105112 ignition[776]: kargs: kargs passed
Jul 14 21:52:21.105160 ignition[776]: Ignition finished successfully
Jul 14 21:52:21.121176 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:52:21.130245 ignition[784]: Ignition 2.19.0
Jul 14 21:52:21.130256 ignition[784]: Stage: disks
Jul 14 21:52:21.130423 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:21.130432 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:21.131355 ignition[784]: disks: disks passed
Jul 14 21:52:21.134052 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:52:21.131399 ignition[784]: Ignition finished successfully
Jul 14 21:52:21.135216 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:52:21.136924 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:52:21.138705 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:52:21.140557 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:52:21.142578 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:52:21.155168 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:52:21.164615 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:52:21.167646 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:52:21.170546 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:52:21.216039 kernel: EXT4-fs (vda9): mounted filesystem a9f35e2f-e295-4589-8fb4-4b611a8bb71c r/w with ordered data mode. Quota mode: none.
Jul 14 21:52:21.216204 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:52:21.217428 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:52:21.231097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:52:21.232738 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:52:21.234205 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:52:21.234253 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:52:21.241335 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (803)
Jul 14 21:52:21.234275 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:52:21.245848 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:21.245869 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:21.245879 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:52:21.238519 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:52:21.240090 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:52:21.249055 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:52:21.250576 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:52:21.281783 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:52:21.285844 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:52:21.289786 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:52:21.292658 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:52:21.359864 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:52:21.370206 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:52:21.372517 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:52:21.377028 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:21.390516 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:52:21.394438 ignition[916]: INFO : Ignition 2.19.0
Jul 14 21:52:21.394438 ignition[916]: INFO : Stage: mount
Jul 14 21:52:21.395934 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:21.395934 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:21.395934 ignition[916]: INFO : mount: mount passed
Jul 14 21:52:21.395934 ignition[916]: INFO : Ignition finished successfully
Jul 14 21:52:21.396960 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:52:21.409140 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:52:21.862261 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:52:21.872312 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:52:21.878916 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (929)
Jul 14 21:52:21.878951 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:52:21.878962 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:52:21.880533 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:52:21.883029 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:52:21.883961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:52:21.899080 ignition[946]: INFO : Ignition 2.19.0
Jul 14 21:52:21.899080 ignition[946]: INFO : Stage: files
Jul 14 21:52:21.900952 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:52:21.900952 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:52:21.900952 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:52:21.904966 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:52:21.904966 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:52:21.904966 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:52:21.904966 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:52:21.904966 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:52:21.904093 unknown[946]: wrote ssh authorized keys file for user: core
Jul 14 21:52:21.913407 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 21:52:21.913407 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 21:52:21.913407 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:52:21.913407 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 14 21:52:21.942529 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 21:52:22.281149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:52:22.283149 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 14 21:52:22.301382 systemd-networkd[769]: eth0: Gained IPv6LL
Jul 14 21:52:22.652358 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 21:52:23.073463 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 21:52:23.073463 ignition[946]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 14 21:52:23.077075 ignition[946]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:52:23.098323 ignition[946]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:52:23.101460 ignition[946]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 21:52:23.104026 ignition[946]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 21:52:23.104026 ignition[946]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 21:52:23.104026 ignition[946]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 21:52:23.104026 ignition[946]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:52:23.104026 ignition[946]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 21:52:23.104026 ignition[946]: INFO : files: files passed
Jul 14 21:52:23.104026 ignition[946]: INFO : Ignition finished successfully
Jul 14 21:52:23.104378 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 21:52:23.118190 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 21:52:23.121400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 21:52:23.122684 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:52:23.122780 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 21:52:23.129033 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 21:52:23.131343 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:52:23.131343 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:52:23.135719 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:52:23.133058 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:52:23.134796 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 21:52:23.141330 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 21:52:23.161782 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:52:23.161886 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 21:52:23.164063 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 21:52:23.165882 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 21:52:23.167658 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 21:52:23.169081 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 21:52:23.184741 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:52:23.193191 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 21:52:23.200519 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:52:23.201737 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:52:23.203784 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 21:52:23.205613 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:52:23.205727 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:52:23.208271 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 21:52:23.210218 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 21:52:23.211804 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 21:52:23.213498 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 21:52:23.215399 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 21:52:23.217376 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 21:52:23.219156 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 21:52:23.221035 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 21:52:23.223028 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 21:52:23.224773 systemd[1]: Stopped target swap.target - Swaps. Jul 14 21:52:23.226273 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 21:52:23.226395 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 21:52:23.228702 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:52:23.230640 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:52:23.232535 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 21:52:23.236087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:52:23.237312 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:52:23.237416 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 21:52:23.240196 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:52:23.240318 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 21:52:23.242279 systemd[1]: Stopped target paths.target - Path Units. Jul 14 21:52:23.243833 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:52:23.247081 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:52:23.248337 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 21:52:23.250381 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 21:52:23.251953 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:52:23.252062 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 21:52:23.253584 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:52:23.253663 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 21:52:23.255186 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:52:23.255293 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 21:52:23.257062 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:52:23.257162 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 21:52:23.269176 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 21:52:23.270082 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:52:23.270207 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:52:23.275236 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 21:52:23.276096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:52:23.276229 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:52:23.278926 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:52:23.279047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 14 21:52:23.287204 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:52:23.288077 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 21:52:23.291142 ignition[1000]: INFO : Ignition 2.19.0 Jul 14 21:52:23.291142 ignition[1000]: INFO : Stage: umount Jul 14 21:52:23.292922 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:52:23.292922 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:52:23.292922 ignition[1000]: INFO : umount: umount passed Jul 14 21:52:23.292922 ignition[1000]: INFO : Ignition finished successfully Jul 14 21:52:23.295227 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:52:23.295845 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:52:23.295944 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 21:52:23.298431 systemd[1]: Stopped target network.target - Network. Jul 14 21:52:23.299547 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:52:23.299618 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 21:52:23.303539 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:52:23.303592 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 21:52:23.305201 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:52:23.305256 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 21:52:23.306879 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 21:52:23.306924 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 21:52:23.308801 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 21:52:23.311234 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 21:52:23.320314 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:52:23.320424 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 21:52:23.324088 systemd-networkd[769]: eth0: DHCPv6 lease lost Jul 14 21:52:23.325700 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 21:52:23.325764 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:52:23.328737 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:52:23.330066 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 21:52:23.331338 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:52:23.331370 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:52:23.342179 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 21:52:23.343197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:52:23.343282 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 21:52:23.345695 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:52:23.345742 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:52:23.347685 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:52:23.347729 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 21:52:23.350711 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:52:23.354410 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jul 14 21:52:23.354496 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 21:52:23.357271 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:52:23.357363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 21:52:23.364567 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:52:23.364674 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 21:52:23.374673 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:52:23.374808 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:52:23.376970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:52:23.377009 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 21:52:23.378823 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:52:23.378856 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:52:23.380664 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:52:23.380714 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 21:52:23.383337 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:52:23.383384 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 21:52:23.385965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:52:23.386026 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 21:52:23.399244 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 21:52:23.400274 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:52:23.400334 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:52:23.402397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:52:23.402442 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:52:23.406278 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:52:23.407102 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 21:52:23.408882 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 21:52:23.411506 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 21:52:23.425326 systemd[1]: Switching root. Jul 14 21:52:23.451311 systemd-journald[238]: Journal stopped Jul 14 21:52:24.238245 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 14 21:52:24.238303 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:52:24.238316 kernel: SELinux: policy capability open_perms=1 Jul 14 21:52:24.238326 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:52:24.238339 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:52:24.238352 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:52:24.238362 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:52:24.238372 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:52:24.238382 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:52:24.238392 systemd[1]: Successfully loaded SELinux policy in 31.868ms. 
Jul 14 21:52:24.238409 kernel: audit: type=1403 audit(1752529943.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:52:24.238420 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.289ms. Jul 14 21:52:24.238432 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 21:52:24.238443 systemd[1]: Detected virtualization kvm. Jul 14 21:52:24.238455 systemd[1]: Detected architecture arm64. Jul 14 21:52:24.238469 systemd[1]: Detected first boot. Jul 14 21:52:24.238480 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:52:24.238491 zram_generator::config[1062]: No configuration found. Jul 14 21:52:24.238502 systemd[1]: Populated /etc with preset unit settings. Jul 14 21:52:24.238513 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:52:24.238528 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 21:52:24.238539 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 21:52:24.238552 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 21:52:24.238563 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 21:52:24.238575 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 21:52:24.238586 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 21:52:24.238597 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 21:52:24.238608 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 21:52:24.238619 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 21:52:24.238630 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:52:24.238641 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:52:24.238654 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 21:52:24.238667 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 21:52:24.238678 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 21:52:24.238689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 21:52:24.238700 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 14 21:52:24.238711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:52:24.238722 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 21:52:24.238733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 21:52:24.238743 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 21:52:24.238756 systemd[1]: Reached target slices.target - Slice Units. Jul 14 21:52:24.238767 systemd[1]: Reached target swap.target - Swaps. Jul 14 21:52:24.238777 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jul 14 21:52:24.238788 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 21:52:24.238799 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 14 21:52:24.238811 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 14 21:52:24.238821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:52:24.238832 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 21:52:24.238845 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:52:24.238856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 21:52:24.238866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 21:52:24.238877 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 21:52:24.238888 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 21:52:24.238900 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 21:52:24.238911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 21:52:24.238922 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 21:52:24.238933 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 21:52:24.238945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:52:24.238956 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 21:52:24.238967 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 21:52:24.238978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:52:24.238988 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:52:24.238999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:52:24.239010 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 21:52:24.239032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:52:24.239045 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:52:24.239057 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 14 21:52:24.239068 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 14 21:52:24.239078 kernel: fuse: init (API version 7.39) Jul 14 21:52:24.239088 kernel: loop: module loaded Jul 14 21:52:24.239098 kernel: ACPI: bus type drm_connector registered Jul 14 21:52:24.239108 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 21:52:24.239119 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 21:52:24.239130 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 21:52:24.239143 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 21:52:24.239154 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 21:52:24.239196 systemd-journald[1141]: Collecting audit messages is disabled. 
Jul 14 21:52:24.239225 systemd-journald[1141]: Journal started Jul 14 21:52:24.239248 systemd-journald[1141]: Runtime Journal (/run/log/journal/feeccba996ab4fc791c8284c03836314) is 5.9M, max 47.3M, 41.4M free. Jul 14 21:52:24.241944 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 21:52:24.243001 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 21:52:24.244146 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 21:52:24.245365 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 21:52:24.246456 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 21:52:24.247686 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 21:52:24.248890 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 21:52:24.250193 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 21:52:24.251672 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:52:24.253220 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:52:24.253390 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 21:52:24.254774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:52:24.254939 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:52:24.256331 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:52:24.256487 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:52:24.258001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:52:24.258180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:52:24.259608 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:52:24.259774 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 21:52:24.261320 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:52:24.261553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:52:24.263072 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 21:52:24.264506 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 21:52:24.266258 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 21:52:24.278153 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 21:52:24.286183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 21:52:24.288605 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 21:52:24.289915 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:52:24.292180 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 21:52:24.294306 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 21:52:24.295405 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:52:24.298172 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 14 21:52:24.299355 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:52:24.300418 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:52:24.307006 systemd-journald[1141]: Time spent on flushing to /var/log/journal/feeccba996ab4fc791c8284c03836314 is 20.928ms for 841 entries. Jul 14 21:52:24.307006 systemd-journald[1141]: System Journal (/var/log/journal/feeccba996ab4fc791c8284c03836314) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:52:24.338049 systemd-journald[1141]: Received client request to flush runtime journal. Jul 14 21:52:24.307358 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 21:52:24.311382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:52:24.312889 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 21:52:24.314394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 21:52:24.315953 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 21:52:24.319344 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 21:52:24.328280 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 21:52:24.334053 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jul 14 21:52:24.334063 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Jul 14 21:52:24.338272 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 21:52:24.348481 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 21:52:24.350124 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 21:52:24.352234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:52:24.354503 udevadm[1205]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 21:52:24.369082 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 21:52:24.381192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 21:52:24.394198 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 14 21:52:24.394229 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jul 14 21:52:24.397942 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:52:24.776189 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 21:52:24.788171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:52:24.808187 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Jul 14 21:52:24.825075 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:52:24.835442 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 21:52:24.855224 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 21:52:24.856804 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Jul 14 21:52:24.869147 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1230) Jul 14 21:52:24.919139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 21:52:24.930341 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 21:52:24.952783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 21:52:24.962450 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 21:52:24.975346 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 21:52:24.986290 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:52:25.000044 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:52:25.008807 systemd-networkd[1232]: lo: Link UP Jul 14 21:52:25.008816 systemd-networkd[1232]: lo: Gained carrier Jul 14 21:52:25.009568 systemd-networkd[1232]: Enumeration completed Jul 14 21:52:25.009695 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 21:52:25.011920 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:52:25.011931 systemd-networkd[1232]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:52:25.012531 systemd-networkd[1232]: eth0: Link UP Jul 14 21:52:25.012543 systemd-networkd[1232]: eth0: Gained carrier Jul 14 21:52:25.012557 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:52:25.018192 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 21:52:25.019772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 21:52:25.021830 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:52:25.024722 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 21:52:25.032072 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:52:25.038069 systemd-networkd[1232]: eth0: DHCPv4 address 10.0.0.64/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:52:25.065518 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 21:52:25.067045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 14 21:52:25.068321 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:52:25.068358 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 21:52:25.069376 systemd[1]: Reached target machines.target - Containers. Jul 14 21:52:25.071358 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 21:52:25.088158 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 21:52:25.090385 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 21:52:25.091589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 14 21:52:25.093178 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 21:52:25.096371 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 21:52:25.101543 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 21:52:25.103437 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 21:52:25.111053 kernel: loop0: detected capacity change from 0 to 114432 Jul 14 21:52:25.111776 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 21:52:25.123000 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:52:25.123757 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 21:52:25.130052 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:52:25.151045 kernel: loop1: detected capacity change from 0 to 114328 Jul 14 21:52:25.190052 kernel: loop2: detected capacity change from 0 to 203944 Jul 14 21:52:25.226044 kernel: loop3: detected capacity change from 0 to 114432 Jul 14 21:52:25.233046 kernel: loop4: detected capacity change from 0 to 114328 Jul 14 21:52:25.237059 kernel: loop5: detected capacity change from 0 to 203944 Jul 14 21:52:25.242113 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 21:52:25.242521 (sd-merge)[1291]: Merged extensions into '/usr'. Jul 14 21:52:25.245709 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 21:52:25.245724 systemd[1]: Reloading... Jul 14 21:52:25.289089 zram_generator::config[1316]: No configuration found. Jul 14 21:52:25.351262 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 21:52:25.389216 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:52:25.433570 systemd[1]: Reloading finished in 187 ms. Jul 14 21:52:25.450795 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 21:52:25.452282 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 21:52:25.469153 systemd[1]: Starting ensure-sysext.service... Jul 14 21:52:25.471119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 21:52:25.476492 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jul 14 21:52:25.476509 systemd[1]: Reloading... Jul 14 21:52:25.488203 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 21:52:25.488482 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 21:52:25.489127 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 21:52:25.489356 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jul 14 21:52:25.489410 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jul 14 21:52:25.491978 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 14 21:52:25.491994 systemd-tmpfiles[1361]: Skipping /boot Jul 14 21:52:25.498959 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 21:52:25.498990 systemd-tmpfiles[1361]: Skipping /boot Jul 14 21:52:25.524103 zram_generator::config[1390]: No configuration found. Jul 14 21:52:25.610157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:52:25.655039 systemd[1]: Reloading finished in 178 ms. Jul 14 21:52:25.670005 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:52:25.686153 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 21:52:25.688974 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 21:52:25.691923 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 21:52:25.695280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 21:52:25.698260 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 21:52:25.704189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:52:25.714355 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:52:25.719901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:52:25.723780 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:52:25.726482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:52:25.727459 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 21:52:25.729568 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:52:25.729730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:52:25.731618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:52:25.731768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:52:25.742090 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:52:25.745246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:52:25.745736 augenrules[1462]: No rules Jul 14 21:52:25.747344 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 21:52:25.749158 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 21:52:25.753745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:52:25.760280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:52:25.762523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:52:25.763817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:52:25.766283 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 14 21:52:25.767494 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:52:25.768492 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 21:52:25.770373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:52:25.770530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:52:25.772331 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:52:25.772572 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:52:25.780298 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:52:25.791293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:52:25.796227 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:52:25.797285 systemd-resolved[1437]: Positive Trust Anchors: Jul 14 21:52:25.798623 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:52:25.799081 systemd-resolved[1437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:52:25.799116 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 21:52:25.802348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:52:25.805373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:52:25.805547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:52:25.806333 systemd[1]: Finished ensure-sysext.service. Jul 14 21:52:25.807790 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 21:52:25.809147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:52:25.809301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:52:25.810722 systemd-resolved[1437]: Defaulting to hostname 'linux'. Jul 14 21:52:25.811239 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:52:25.811486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:52:25.812916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:52:25.813165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:52:25.814821 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:52:25.815277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:52:25.818341 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 14 21:52:25.822776 systemd[1]: Reached target network.target - Network. Jul 14 21:52:25.824003 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 21:52:25.825314 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:52:25.825471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:52:25.836192 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 21:52:25.880143 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 21:52:25.881191 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 21:52:25.881260 systemd-timesyncd[1501]: Initial clock synchronization to Mon 2025-07-14 21:52:25.567311 UTC. Jul 14 21:52:25.881743 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 21:52:25.882943 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 21:52:25.884194 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 21:52:25.885421 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 21:52:25.886753 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:52:25.886795 systemd[1]: Reached target paths.target - Path Units. Jul 14 21:52:25.887745 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 21:52:25.888916 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 21:52:25.890129 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 21:52:25.891380 systemd[1]: Reached target timers.target - Timer Units. Jul 14 21:52:25.893039 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 21:52:25.895610 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 21:52:25.897845 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 21:52:25.903164 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 21:52:25.904284 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 21:52:25.905310 systemd[1]: Reached target basic.target - Basic System. Jul 14 21:52:25.906399 systemd[1]: System is tainted: cgroupsv1 Jul 14 21:52:25.906450 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:52:25.906471 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:52:25.907633 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 21:52:25.909716 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 21:52:25.911689 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 21:52:25.916196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 21:52:25.917177 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 21:52:25.918263 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 14 21:52:25.923771 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 21:52:25.931193 jq[1507]: false Jul 14 21:52:25.939218 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 21:52:25.945054 extend-filesystems[1509]: Found loop3 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found loop4 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found loop5 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda1 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda2 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda3 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found usr Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda4 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda6 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda7 Jul 14 21:52:25.945054 extend-filesystems[1509]: Found vda9 Jul 14 21:52:25.945054 extend-filesystems[1509]: Checking size of /dev/vda9 Jul 14 21:52:25.995860 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1241) Jul 14 21:52:25.995900 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:52:25.943151 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 21:52:25.952458 dbus-daemon[1506]: [system] SELinux support is enabled Jul 14 21:52:25.996437 extend-filesystems[1509]: Resized partition /dev/vda9 Jul 14 21:52:25.948222 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 21:52:25.997944 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Jul 14 21:52:25.955759 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:52:25.960199 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 21:52:25.962821 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 21:52:26.003490 jq[1529]: true Jul 14 21:52:25.970755 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 21:52:25.974356 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 21:52:25.974580 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 21:52:25.974825 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:52:25.975010 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 21:52:25.979449 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:52:25.979662 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 21:52:26.001678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:52:26.001699 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 21:52:26.007837 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 21:52:26.008747 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 14 21:52:26.008768 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 21:52:26.014813 jq[1537]: true Jul 14 21:52:26.025296 update_engine[1527]: I20250714 21:52:26.025074 1527 main.cc:92] Flatcar Update Engine starting Jul 14 21:52:26.034407 update_engine[1527]: I20250714 21:52:26.028975 1527 update_check_scheduler.cc:74] Next update check in 2m59s Jul 14 21:52:26.034474 tar[1536]: linux-arm64/helm Jul 14 21:52:26.031503 systemd[1]: Started update-engine.service - Update Engine. Jul 14 21:52:26.037467 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 21:52:26.041048 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:52:26.038358 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 21:52:26.066117 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:52:26.066117 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:52:26.066117 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:52:26.068465 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:52:26.084230 extend-filesystems[1509]: Resized filesystem in /dev/vda9 Jul 14 21:52:26.068818 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:52:26.069065 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 21:52:26.070328 systemd-logind[1524]: New seat seat0. Jul 14 21:52:26.074234 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 21:52:26.100559 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:52:26.101953 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 21:52:26.103816 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 21:52:26.109248 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:52:26.253840 containerd[1545]: time="2025-07-14T21:52:26.253736859Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 21:52:26.281304 containerd[1545]: time="2025-07-14T21:52:26.281248657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.282650 containerd[1545]: time="2025-07-14T21:52:26.282608711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.282650 containerd[1545]: time="2025-07-14T21:52:26.282649486Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:52:26.282718 containerd[1545]: time="2025-07-14T21:52:26.282667511Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:52:26.282861 containerd[1545]: time="2025-07-14T21:52:26.282836994Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 21:52:26.282889 containerd[1545]: time="2025-07-14T21:52:26.282862935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 14 21:52:26.282936 containerd[1545]: time="2025-07-14T21:52:26.282919199Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.282966 containerd[1545]: time="2025-07-14T21:52:26.282934764Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283160 containerd[1545]: time="2025-07-14T21:52:26.283137375Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283186 containerd[1545]: time="2025-07-14T21:52:26.283159665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283186 containerd[1545]: time="2025-07-14T21:52:26.283173078Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283186 containerd[1545]: time="2025-07-14T21:52:26.283182532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283270 containerd[1545]: time="2025-07-14T21:52:26.283252477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283456 containerd[1545]: time="2025-07-14T21:52:26.283434912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283594 containerd[1545]: time="2025-07-14T21:52:26.283559968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:52:26.283594 containerd[1545]: time="2025-07-14T21:52:26.283579645Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:52:26.283686 containerd[1545]: time="2025-07-14T21:52:26.283660697Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 21:52:26.283736 containerd[1545]: time="2025-07-14T21:52:26.283723225Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:52:26.287593 containerd[1545]: time="2025-07-14T21:52:26.287515535Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:52:26.287593 containerd[1545]: time="2025-07-14T21:52:26.287575949Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:52:26.287593 containerd[1545]: time="2025-07-14T21:52:26.287592936Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 21:52:26.287703 containerd[1545]: time="2025-07-14T21:52:26.287608885Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 21:52:26.287703 containerd[1545]: time="2025-07-14T21:52:26.287625218Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 14 21:52:26.287823 containerd[1545]: time="2025-07-14T21:52:26.287758807Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288447615Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288616675Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288634892Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288649650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288664561Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288679127Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288692309Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288705760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288720172Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288737927Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288751839Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288763791Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288782892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.289907 containerd[1545]: time="2025-07-14T21:52:26.288796919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288809871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288821400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288832930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288850186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288862061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288874436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288887234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288900954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288912329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288923743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288935580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288951337Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288972436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.288997263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290249 containerd[1545]: time="2025-07-14T21:52:26.289010099Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289157599Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289175009Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289185693Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289197453Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289206830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289220243Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289230004Z" level=info msg="NRI interface is disabled by configuration." Jul 14 21:52:26.290511 containerd[1545]: time="2025-07-14T21:52:26.289240996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 21:52:26.290918 containerd[1545]: time="2025-07-14T21:52:26.290828180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:52:26.291191 containerd[1545]: time="2025-07-14T21:52:26.291163918Z" level=info msg="Connect containerd service" Jul 14 21:52:26.291298 containerd[1545]: time="2025-07-14T21:52:26.291274524Z" level=info msg="using legacy CRI server" Jul 14 21:52:26.291325 containerd[1545]: time="2025-07-14T21:52:26.291299082Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 21:52:26.291451 containerd[1545]: time="2025-07-14T21:52:26.291419603Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:52:26.292041 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:52:26.299098 containerd[1545]: time="2025-07-14T21:52:26.299051839Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config 
found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:52:26.299648 containerd[1545]: time="2025-07-14T21:52:26.299606714Z" level=info msg="Start subscribing containerd event" Jul 14 21:52:26.299684 containerd[1545]: time="2025-07-14T21:52:26.299672163Z" level=info msg="Start recovering state" Jul 14 21:52:26.299744 containerd[1545]: time="2025-07-14T21:52:26.299723815Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:52:26.300071 containerd[1545]: time="2025-07-14T21:52:26.299747219Z" level=info msg="Start event monitor" Jul 14 21:52:26.300071 containerd[1545]: time="2025-07-14T21:52:26.299761170Z" level=info msg="Start snapshots syncer" Jul 14 21:52:26.300071 containerd[1545]: time="2025-07-14T21:52:26.299771201Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:52:26.300071 containerd[1545]: time="2025-07-14T21:52:26.299775812Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:52:26.300071 containerd[1545]: time="2025-07-14T21:52:26.299778503Z" level=info msg="Start streaming server" Jul 14 21:52:26.300071 containerd[1545]: time="2025-07-14T21:52:26.300022504Z" level=info msg="containerd successfully booted in 0.047362s" Jul 14 21:52:26.300136 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 21:52:26.311965 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 21:52:26.325265 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 21:52:26.332167 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 21:52:26.332411 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 21:52:26.334343 systemd-networkd[1232]: eth0: Gained IPv6LL Jul 14 21:52:26.335221 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 21:52:26.338730 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 21:52:26.340645 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 21:52:26.343367 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 21:52:26.348235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:26.350411 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 21:52:26.354784 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 21:52:26.367433 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 21:52:26.372959 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 21:52:26.376990 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 21:52:26.386222 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:52:26.386481 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 21:52:26.391824 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 21:52:26.394729 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 21:52:26.412210 tar[1536]: linux-arm64/LICENSE Jul 14 21:52:26.412364 tar[1536]: linux-arm64/README.md Jul 14 21:52:26.427617 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 21:52:26.896174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:26.897705 systemd[1]: Reached target multi-user.target - Multi-User System. 
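
The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") clears once a CNI network config is installed; on a kubeadm-style cluster this is normally left to the pod network add-on rather than written by hand. A minimal sketch of installing a host-local bridge conflist, assuming the standard bridge/host-local/portmap plugins exist under /opt/cni/bin (the path containerd reports above); the network name and subnet are illustrative only:

    #!/usr/bin/env python3
    # Sketch: install a minimal CNI bridge config so containerd's CRI plugin
    # can initialize pod networking. Name and subnet are illustrative only.
    import json
    import pathlib

    conf = {
        "cniVersion": "0.4.0",
        "name": "examplenet",          # illustrative network name
        "plugins": [
            {
                "type": "bridge",      # assumes /opt/cni/bin/bridge exists
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",   # illustrative pod subnet
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2))
    print(f"wrote {path}")

With such a file in place, the "Start cni network conf syncer for default" loop above picks the config up without a containerd restart.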
Jul 14 21:52:26.900127 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:52:26.903095 systemd[1]: Startup finished in 5.500s (kernel) + 3.292s (userspace) = 8.793s. Jul 14 21:52:27.316583 kubelet[1643]: E0714 21:52:27.316525 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:52:27.318950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:52:27.319192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:52:31.649598 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 21:52:31.660293 systemd[1]: Started sshd@0-10.0.0.64:22-10.0.0.1:42126.service - OpenSSH per-connection server daemon (10.0.0.1:42126). Jul 14 21:52:31.700862 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 42126 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:31.702503 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:31.710843 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 21:52:31.722201 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 21:52:31.724061 systemd-logind[1524]: New session 1 of user core. Jul 14 21:52:31.731305 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 21:52:31.733146 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 21:52:31.738874 (systemd)[1663]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:52:31.820190 systemd[1663]: Queued start job for default target default.target. Jul 14 21:52:31.820481 systemd[1663]: Created slice app.slice - User Application Slice. Jul 14 21:52:31.820503 systemd[1663]: Reached target paths.target - Paths. Jul 14 21:52:31.820514 systemd[1663]: Reached target timers.target - Timers. Jul 14 21:52:31.831097 systemd[1663]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:52:31.836905 systemd[1663]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:52:31.836965 systemd[1663]: Reached target sockets.target - Sockets. Jul 14 21:52:31.836977 systemd[1663]: Reached target basic.target - Basic System. Jul 14 21:52:31.837029 systemd[1663]: Reached target default.target - Main User Target. Jul 14 21:52:31.837053 systemd[1663]: Startup finished in 93ms. Jul 14 21:52:31.837502 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:52:31.838684 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:52:31.898828 systemd[1]: Started sshd@1-10.0.0.64:22-10.0.0.1:42130.service - OpenSSH per-connection server daemon (10.0.0.1:42130). Jul 14 21:52:31.927356 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 42130 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:31.928425 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:31.932777 systemd-logind[1524]: New session 2 of user core. Jul 14 21:52:31.951307 systemd[1]: Started session-2.scope - Session 2 of User core. 
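
The kubelet exit above is expected on first boot: /var/lib/kubelet/config.yaml is normally generated by `kubeadm init` or `kubeadm join`, and systemd keeps rescheduling the unit until the file appears (the restart-counter entries later in the log). For reference, a minimal sketch of such a KubeletConfiguration; the CA path, static-pod path, and cgroup driver match values visible later in this log, while the DNS values are illustrative rather than anything kubeadm guarantees:

    #!/usr/bin/env python3
    # Sketch: write a bare-bones KubeletConfiguration so kubelet.service can
    # start. kubeadm normally generates this file; values are illustrative.
    import pathlib

    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt   # path seen later in this log
    cgroupDriver: cgroupfs          # matches CgroupDriver in the NodeConfig dump below
    staticPodPath: /etc/kubernetes/manifests
    clusterDNS:
      - 10.96.0.10                  # illustrative service-network DNS IP
    clusterDomain: cluster.local
    """

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)
    print(f"wrote {path}")
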
Jul 14 21:52:32.001311 sshd[1675]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.016219 systemd[1]: Started sshd@2-10.0.0.64:22-10.0.0.1:42136.service - OpenSSH per-connection server daemon (10.0.0.1:42136). Jul 14 21:52:32.016573 systemd[1]: sshd@1-10.0.0.64:22-10.0.0.1:42130.service: Deactivated successfully. Jul 14 21:52:32.018820 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:52:32.019249 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:52:32.020178 systemd-logind[1524]: Removed session 2. Jul 14 21:52:32.044621 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 42136 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.045752 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.049562 systemd-logind[1524]: New session 3 of user core. Jul 14 21:52:32.060412 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:52:32.106802 sshd[1680]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.118269 systemd[1]: Started sshd@3-10.0.0.64:22-10.0.0.1:42152.service - OpenSSH per-connection server daemon (10.0.0.1:42152). Jul 14 21:52:32.119043 systemd[1]: sshd@2-10.0.0.64:22-10.0.0.1:42136.service: Deactivated successfully. Jul 14 21:52:32.120263 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:52:32.120915 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:52:32.122099 systemd-logind[1524]: Removed session 3. Jul 14 21:52:32.148296 sshd[1688]: Accepted publickey for core from 10.0.0.1 port 42152 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.149517 sshd[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.153607 systemd-logind[1524]: New session 4 of user core. Jul 14 21:52:32.162251 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 21:52:32.214745 sshd[1688]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.225325 systemd[1]: Started sshd@4-10.0.0.64:22-10.0.0.1:42156.service - OpenSSH per-connection server daemon (10.0.0.1:42156). Jul 14 21:52:32.225693 systemd[1]: sshd@3-10.0.0.64:22-10.0.0.1:42152.service: Deactivated successfully. Jul 14 21:52:32.227459 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:52:32.227977 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:52:32.229366 systemd-logind[1524]: Removed session 4. Jul 14 21:52:32.255942 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 42156 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.257137 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.262080 systemd-logind[1524]: New session 5 of user core. Jul 14 21:52:32.273327 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 21:52:32.333693 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 21:52:32.333976 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:52:32.353876 sudo[1703]: pam_unix(sudo:session): session closed for user root Jul 14 21:52:32.357576 sshd[1696]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.366873 systemd[1]: Started sshd@5-10.0.0.64:22-10.0.0.1:42160.service - OpenSSH per-connection server daemon (10.0.0.1:42160). 
Jul 14 21:52:32.367272 systemd[1]: sshd@4-10.0.0.64:22-10.0.0.1:42156.service: Deactivated successfully. Jul 14 21:52:32.369168 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:52:32.369714 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:52:32.370978 systemd-logind[1524]: Removed session 5. Jul 14 21:52:32.397613 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 42160 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.398874 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.402385 systemd-logind[1524]: New session 6 of user core. Jul 14 21:52:32.415306 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 21:52:32.466736 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 21:52:32.467045 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:52:32.470161 sudo[1713]: pam_unix(sudo:session): session closed for user root Jul 14 21:52:32.474756 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 21:52:32.475063 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:52:32.497285 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 21:52:32.498648 auditctl[1716]: No rules Jul 14 21:52:32.499480 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 21:52:32.499718 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 21:52:32.502312 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 21:52:32.532072 augenrules[1735]: No rules Jul 14 21:52:32.533290 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 21:52:32.534830 sudo[1712]: pam_unix(sudo:session): session closed for user root Jul 14 21:52:32.536371 sshd[1705]: pam_unix(sshd:session): session closed for user core Jul 14 21:52:32.548208 systemd[1]: Started sshd@6-10.0.0.64:22-10.0.0.1:42162.service - OpenSSH per-connection server daemon (10.0.0.1:42162). Jul 14 21:52:32.548558 systemd[1]: sshd@5-10.0.0.64:22-10.0.0.1:42160.service: Deactivated successfully. Jul 14 21:52:32.550580 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:52:32.550660 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:52:32.551806 systemd-logind[1524]: Removed session 6. Jul 14 21:52:32.579414 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 42162 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:52:32.580591 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:52:32.584032 systemd-logind[1524]: New session 7 of user core. Jul 14 21:52:32.591225 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 21:52:32.639814 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:52:32.640113 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:52:32.947233 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 14 21:52:32.947393 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 21:52:33.221092 dockerd[1766]: time="2025-07-14T21:52:33.219846476Z" level=info msg="Starting up" Jul 14 21:52:33.480369 dockerd[1766]: time="2025-07-14T21:52:33.480266457Z" level=info msg="Loading containers: start." Jul 14 21:52:33.581041 kernel: Initializing XFRM netlink socket Jul 14 21:52:33.643187 systemd-networkd[1232]: docker0: Link UP Jul 14 21:52:33.662924 dockerd[1766]: time="2025-07-14T21:52:33.662876190Z" level=info msg="Loading containers: done." Jul 14 21:52:33.681528 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck555077449-merged.mount: Deactivated successfully. Jul 14 21:52:33.684220 dockerd[1766]: time="2025-07-14T21:52:33.684162179Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:52:33.684306 dockerd[1766]: time="2025-07-14T21:52:33.684274502Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 21:52:33.684402 dockerd[1766]: time="2025-07-14T21:52:33.684376271Z" level=info msg="Daemon has completed initialization" Jul 14 21:52:33.717910 dockerd[1766]: time="2025-07-14T21:52:33.717763017Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:52:33.718005 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 21:52:34.263686 containerd[1545]: time="2025-07-14T21:52:34.263647416Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 14 21:52:34.989851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount840028057.mount: Deactivated successfully. 
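
The dockerd warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") can be cross-checked from userspace. A small sketch, assuming the overlayfs module parameters are exposed under /sys/module/overlay/parameters as on stock kernels; note dockerd's check keys off kernel support for redirect_dir, so reading the runtime default here is only an approximation:

    #!/usr/bin/env python3
    # Sketch: inspect the overlayfs module parameters behind dockerd's
    # "Not using native diff" warning (assumes a stock sysfs layout).
    import pathlib

    params = pathlib.Path("/sys/module/overlay/parameters")
    for name in ("redirect_dir", "metacopy", "index"):
        p = params / name
        value = p.read_text().strip() if p.exists() else "<not exposed>"
        print(f"overlay {name} = {value}")

    rd = params / "redirect_dir"
    if rd.exists() and rd.read_text().strip() == "Y":
        print("redirect_dir on: dockerd falls back from native diff "
              "(degraded image-build performance, as warned above)")
    else:
        print("redirect_dir off or not exposed: native diff may be usable")
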
Jul 14 21:52:35.796038 containerd[1545]: time="2025-07-14T21:52:35.795978341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:35.796992 containerd[1545]: time="2025-07-14T21:52:35.796767724Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 14 21:52:35.797673 containerd[1545]: time="2025-07-14T21:52:35.797634464Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:35.800486 containerd[1545]: time="2025-07-14T21:52:35.800458929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:35.801865 containerd[1545]: time="2025-07-14T21:52:35.801741152Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.538050668s" Jul 14 21:52:35.801865 containerd[1545]: time="2025-07-14T21:52:35.801785938Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 14 21:52:35.806625 containerd[1545]: time="2025-07-14T21:52:35.806585243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 21:52:36.813648 containerd[1545]: time="2025-07-14T21:52:36.813596649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:36.814118 containerd[1545]: time="2025-07-14T21:52:36.814072965Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 14 21:52:36.815063 containerd[1545]: time="2025-07-14T21:52:36.815030742Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:36.818423 containerd[1545]: time="2025-07-14T21:52:36.818387157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:36.819134 containerd[1545]: time="2025-07-14T21:52:36.819100284Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.012474153s" Jul 14 21:52:36.819191 containerd[1545]: time="2025-07-14T21:52:36.819135161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 14 
21:52:36.820784 containerd[1545]: time="2025-07-14T21:52:36.820740627Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 21:52:37.399615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:52:37.409252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:37.508764 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:37.512707 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:52:37.550108 kubelet[1981]: E0714 21:52:37.550042 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:52:37.552806 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:52:37.552994 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:52:37.826228 containerd[1545]: time="2025-07-14T21:52:37.826105288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:37.827172 containerd[1545]: time="2025-07-14T21:52:37.826572750Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 14 21:52:37.827703 containerd[1545]: time="2025-07-14T21:52:37.827679192Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:37.831436 containerd[1545]: time="2025-07-14T21:52:37.831373662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:37.832507 containerd[1545]: time="2025-07-14T21:52:37.832458659Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.011682034s" Jul 14 21:52:37.832507 containerd[1545]: time="2025-07-14T21:52:37.832496752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 14 21:52:37.833202 containerd[1545]: time="2025-07-14T21:52:37.832986809Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 21:52:38.731730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658051166.mount: Deactivated successfully. 
Jul 14 21:52:39.165437 containerd[1545]: time="2025-07-14T21:52:39.165371631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:39.165858 containerd[1545]: time="2025-07-14T21:52:39.165821142Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 14 21:52:39.166645 containerd[1545]: time="2025-07-14T21:52:39.166620778Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:39.168715 containerd[1545]: time="2025-07-14T21:52:39.168666622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:39.169368 containerd[1545]: time="2025-07-14T21:52:39.169293340Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.3362576s" Jul 14 21:52:39.169368 containerd[1545]: time="2025-07-14T21:52:39.169326669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 14 21:52:39.169851 containerd[1545]: time="2025-07-14T21:52:39.169827424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:52:39.755288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281892181.mount: Deactivated successfully. 
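
Each "Pulled image" entry above pairs a byte count with a wall-clock duration, which makes the effective registry throughput easy to read off. A quick sketch over the four control-plane pulls logged so far, with the size/duration pairs copied verbatim from the entries above:

    #!/usr/bin/env python3
    # Sketch: effective pull throughput from the size/duration pairs that
    # containerd logs in its "Pulled image" events above.
    pulls = {
        # image: (size in bytes, duration in seconds), copied from the log
        "kube-apiserver:v1.31.10": (25_648_593, 1.538050668),
        "kube-controller-manager:v1.31.10": (23_995_467, 1.012474153),
        "kube-scheduler:v1.31.10": (18_660_874, 1.011682034),
        "kube-proxy:v1.31.10": (26_914_976, 1.3362576),
    }

    for image, (size, secs) in pulls.items():
        mib_s = size / secs / (1 << 20)
        print(f"{image}: {mib_s:.1f} MiB/s")
    # kube-proxy, for example, comes out around 19.2 MiB/s.
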
Jul 14 21:52:40.337190 containerd[1545]: time="2025-07-14T21:52:40.337134263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:40.337647 containerd[1545]: time="2025-07-14T21:52:40.337594861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 14 21:52:40.338516 containerd[1545]: time="2025-07-14T21:52:40.338487749Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:40.342428 containerd[1545]: time="2025-07-14T21:52:40.342363690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:40.343482 containerd[1545]: time="2025-07-14T21:52:40.343443958Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.173582642s" Jul 14 21:52:40.343531 containerd[1545]: time="2025-07-14T21:52:40.343483278Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 21:52:40.343894 containerd[1545]: time="2025-07-14T21:52:40.343870642Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:52:40.914598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550334711.mount: Deactivated successfully. 
Jul 14 21:52:40.917729 containerd[1545]: time="2025-07-14T21:52:40.917682970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:40.918091 containerd[1545]: time="2025-07-14T21:52:40.918065244Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 21:52:40.918925 containerd[1545]: time="2025-07-14T21:52:40.918882910Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:40.921068 containerd[1545]: time="2025-07-14T21:52:40.921040345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:40.921974 containerd[1545]: time="2025-07-14T21:52:40.921896576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 577.995162ms" Jul 14 21:52:40.921974 containerd[1545]: time="2025-07-14T21:52:40.921931643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:52:40.922565 containerd[1545]: time="2025-07-14T21:52:40.922366517Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 21:52:41.445100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812786626.mount: Deactivated successfully. Jul 14 21:52:42.852541 containerd[1545]: time="2025-07-14T21:52:42.852493001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:42.853570 containerd[1545]: time="2025-07-14T21:52:42.853323299Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 14 21:52:42.854252 containerd[1545]: time="2025-07-14T21:52:42.854216107Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:42.857488 containerd[1545]: time="2025-07-14T21:52:42.857455161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:52:42.858941 containerd[1545]: time="2025-07-14T21:52:42.858907009Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.936507717s" Jul 14 21:52:42.858998 containerd[1545]: time="2025-07-14T21:52:42.858942444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 14 21:52:47.649664 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 14 21:52:47.658213 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:47.804227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:47.807115 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:52:47.840402 kubelet[2149]: E0714 21:52:47.840344 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:52:47.842482 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:52:47.842631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:52:48.882431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:48.893199 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:48.911921 systemd[1]: Reloading requested from client PID 2168 ('systemctl') (unit session-7.scope)... Jul 14 21:52:48.911939 systemd[1]: Reloading... Jul 14 21:52:48.970048 zram_generator::config[2208]: No configuration found. Jul 14 21:52:49.102463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:52:49.156147 systemd[1]: Reloading finished in 243 ms. Jul 14 21:52:49.198929 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:49.203202 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:52:49.203466 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:49.213243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:49.309080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:49.312740 (kubelet)[2267]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:52:49.347533 kubelet[2267]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:52:49.347533 kubelet[2267]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:52:49.347533 kubelet[2267]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
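
The deprecation warnings above point at KubeletConfiguration fields that replace the command-line flags. A hedged sketch of the flag-to-field mapping, with field names per kubelet.config.k8s.io/v1beta1; --pod-infra-container-image has no config-file equivalent and is simply going away, as the warning itself says:

    #!/usr/bin/env python3
    # Sketch: where the deprecated kubelet flags from the warnings above
    # live in KubeletConfiguration (kubelet.config.k8s.io/v1beta1).
    FLAG_TO_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image: no config-file field; per the warning,
        # the image garbage collector will take the sandbox image from CRI.
    }

    for flag, field in FLAG_TO_FIELD.items():
        print(f"{flag} -> {field}")
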
Jul 14 21:52:49.347867 kubelet[2267]: I0714 21:52:49.347587 2267 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:52:49.944153 kubelet[2267]: I0714 21:52:49.944101 2267 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:52:49.944153 kubelet[2267]: I0714 21:52:49.944135 2267 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:52:49.944396 kubelet[2267]: I0714 21:52:49.944366 2267 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:52:49.971195 kubelet[2267]: E0714 21:52:49.971147 2267 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.64:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:49.972039 kubelet[2267]: I0714 21:52:49.971988 2267 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:52:49.981514 kubelet[2267]: E0714 21:52:49.981470 2267 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:52:49.981514 kubelet[2267]: I0714 21:52:49.981504 2267 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:52:49.984898 kubelet[2267]: I0714 21:52:49.984851 2267 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:52:49.985831 kubelet[2267]: I0714 21:52:49.985795 2267 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:52:49.985981 kubelet[2267]: I0714 21:52:49.985939 2267 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:52:49.986163 kubelet[2267]: I0714 21:52:49.985975 2267 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 21:52:49.986256 kubelet[2267]: I0714 21:52:49.986167 2267 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:52:49.986256 kubelet[2267]: I0714 21:52:49.986177 2267 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:52:49.986431 kubelet[2267]: I0714 21:52:49.986405 2267 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:49.990467 kubelet[2267]: I0714 21:52:49.990172 2267 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:52:49.990467 kubelet[2267]: I0714 21:52:49.990204 2267 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:52:49.990467 kubelet[2267]: I0714 21:52:49.990226 2267 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:52:49.990467 kubelet[2267]: I0714 21:52:49.990301 2267 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:52:49.993524 kubelet[2267]: W0714 21:52:49.993390 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:49.993524 kubelet[2267]: E0714 21:52:49.993467 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:49.993524 kubelet[2267]: W0714 21:52:49.993401 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:49.993524 kubelet[2267]: E0714 21:52:49.993494 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:49.994319 kubelet[2267]: I0714 21:52:49.994274 2267 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 21:52:49.995035 kubelet[2267]: I0714 21:52:49.995009 2267 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:52:49.995137 kubelet[2267]: W0714 21:52:49.995125 2267 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 21:52:49.996041 kubelet[2267]: I0714 21:52:49.996003 2267 server.go:1274] "Started kubelet" Jul 14 21:52:49.996668 kubelet[2267]: I0714 21:52:49.996466 2267 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:52:49.996668 kubelet[2267]: I0714 21:52:49.996547 2267 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:52:49.997778 kubelet[2267]: I0714 21:52:49.997085 2267 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:52:49.997778 kubelet[2267]: I0714 21:52:49.997737 2267 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:52:50.000494 kubelet[2267]: I0714 21:52:49.998665 2267 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:52:50.000494 kubelet[2267]: I0714 21:52:49.998936 2267 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:52:50.000494 kubelet[2267]: I0714 21:52:49.999078 2267 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:52:50.000494 kubelet[2267]: I0714 21:52:49.999161 2267 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:52:50.000494 kubelet[2267]: I0714 21:52:49.999702 2267 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:52:50.000494 kubelet[2267]: W0714 21:52:49.999701 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:50.000494 kubelet[2267]: E0714 21:52:49.999742 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" 
logger="UnhandledError" Jul 14 21:52:50.000494 kubelet[2267]: E0714 21:52:50.000317 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:50.001043 kubelet[2267]: I0714 21:52:50.000985 2267 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:52:50.001111 kubelet[2267]: I0714 21:52:50.001094 2267 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:52:50.004179 kubelet[2267]: E0714 21:52:50.000897 2267 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.64:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.64:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523cb8efd61ecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:52:49.995980491 +0000 UTC m=+0.680262941,LastTimestamp:2025-07-14 21:52:49.995980491 +0000 UTC m=+0.680262941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:52:50.004406 kubelet[2267]: E0714 21:52:50.004369 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="200ms" Jul 14 21:52:50.004872 kubelet[2267]: E0714 21:52:50.004838 2267 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:52:50.006495 kubelet[2267]: I0714 21:52:50.006465 2267 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:52:50.017422 kubelet[2267]: I0714 21:52:50.017384 2267 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:52:50.019127 kubelet[2267]: I0714 21:52:50.019097 2267 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 21:52:50.019127 kubelet[2267]: I0714 21:52:50.019128 2267 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:52:50.019219 kubelet[2267]: I0714 21:52:50.019144 2267 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:52:50.019219 kubelet[2267]: E0714 21:52:50.019185 2267 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:52:50.019950 kubelet[2267]: W0714 21:52:50.019766 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:50.019950 kubelet[2267]: E0714 21:52:50.019807 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:50.025730 kubelet[2267]: I0714 21:52:50.025709 2267 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:52:50.025826 kubelet[2267]: I0714 21:52:50.025814 2267 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:52:50.025878 kubelet[2267]: I0714 21:52:50.025868 2267 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:50.099991 kubelet[2267]: I0714 21:52:50.099958 2267 policy_none.go:49] "None policy: Start" Jul 14 21:52:50.100595 kubelet[2267]: E0714 21:52:50.100558 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:50.100958 kubelet[2267]: I0714 21:52:50.100929 2267 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:52:50.100958 kubelet[2267]: I0714 21:52:50.100959 2267 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:52:50.104890 kubelet[2267]: I0714 21:52:50.104857 2267 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:52:50.105698 kubelet[2267]: I0714 21:52:50.105054 2267 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:52:50.105698 kubelet[2267]: I0714 21:52:50.105072 2267 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:52:50.105698 kubelet[2267]: I0714 21:52:50.105626 2267 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:52:50.106693 kubelet[2267]: E0714 21:52:50.106672 2267 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:52:50.205714 kubelet[2267]: E0714 21:52:50.205596 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="400ms" Jul 14 21:52:50.207030 kubelet[2267]: I0714 21:52:50.206988 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:52:50.207633 kubelet[2267]: E0714 21:52:50.207593 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 
10.0.0.64:6443: connect: connection refused" node="localhost" Jul 14 21:52:50.301113 kubelet[2267]: I0714 21:52:50.300853 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:50.301113 kubelet[2267]: I0714 21:52:50.300893 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:50.301113 kubelet[2267]: I0714 21:52:50.300915 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.301113 kubelet[2267]: I0714 21:52:50.300932 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.301113 kubelet[2267]: I0714 21:52:50.300948 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:50.301331 kubelet[2267]: I0714 21:52:50.300967 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:50.301331 kubelet[2267]: I0714 21:52:50.300981 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.301331 kubelet[2267]: I0714 21:52:50.300995 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.301331 kubelet[2267]: I0714 21:52:50.301009 2267 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:50.409495 kubelet[2267]: I0714 21:52:50.409457 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:52:50.411265 kubelet[2267]: E0714 21:52:50.409766 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jul 14 21:52:50.424314 kubelet[2267]: E0714 21:52:50.424286 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:50.425106 kubelet[2267]: E0714 21:52:50.425073 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:50.425211 containerd[1545]: time="2025-07-14T21:52:50.425066920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8ea7f213f3541e9e38adcf7476a1ac9,Namespace:kube-system,Attempt:0,}" Jul 14 21:52:50.425714 containerd[1545]: time="2025-07-14T21:52:50.425598193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 14 21:52:50.426826 kubelet[2267]: E0714 21:52:50.426801 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:50.427326 containerd[1545]: time="2025-07-14T21:52:50.427275880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 14 21:52:50.606471 kubelet[2267]: E0714 21:52:50.606343 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="800ms" Jul 14 21:52:50.811759 kubelet[2267]: I0714 21:52:50.811693 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:52:50.812080 kubelet[2267]: E0714 21:52:50.812046 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jul 14 21:52:50.953140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388171205.mount: Deactivated successfully. 
Jul 14 21:52:50.959333 containerd[1545]: time="2025-07-14T21:52:50.959256197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.961209 containerd[1545]: time="2025-07-14T21:52:50.961169469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:52:50.961914 containerd[1545]: time="2025-07-14T21:52:50.961887005Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.963332 containerd[1545]: time="2025-07-14T21:52:50.963297718Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.964556 containerd[1545]: time="2025-07-14T21:52:50.964517574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 14 21:52:50.965799 containerd[1545]: time="2025-07-14T21:52:50.965769420Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.967328 containerd[1545]: time="2025-07-14T21:52:50.967287441Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:52:50.968071 containerd[1545]: time="2025-07-14T21:52:50.968040361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:52:50.971037 containerd[1545]: time="2025-07-14T21:52:50.970740060Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.066147ms" Jul 14 21:52:50.971574 containerd[1545]: time="2025-07-14T21:52:50.971535712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.532644ms" Jul 14 21:52:50.972813 containerd[1545]: time="2025-07-14T21:52:50.972781527Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.443546ms" Jul 14 21:52:51.043435 kubelet[2267]: W0714 21:52:51.043339 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:51.043435 kubelet[2267]: 
E0714 21:52:51.043410 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.64:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152262566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152310818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152325997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152402491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152153877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152203648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152218307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.152764 containerd[1545]: time="2025-07-14T21:52:51.152310139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.154102 containerd[1545]: time="2025-07-14T21:52:51.153824588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:52:51.154102 containerd[1545]: time="2025-07-14T21:52:51.153884585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:52:51.154102 containerd[1545]: time="2025-07-14T21:52:51.153899724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.154102 containerd[1545]: time="2025-07-14T21:52:51.153977256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:52:51.195920 containerd[1545]: time="2025-07-14T21:52:51.195851209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b3097479e79b01086f9658a4eab253d5da6b4d3ba37113b66f1719846c98ba\"" Jul 14 21:52:51.197290 kubelet[2267]: E0714 21:52:51.197261 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.200354 containerd[1545]: time="2025-07-14T21:52:51.200267094Z" level=info msg="CreateContainer within sandbox \"04b3097479e79b01086f9658a4eab253d5da6b4d3ba37113b66f1719846c98ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:52:51.201709 containerd[1545]: time="2025-07-14T21:52:51.201563407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b8ea7f213f3541e9e38adcf7476a1ac9,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d4d715dcb610fe5ecdcf10077c9494b640e8d99d74c4ea4c9cb75379c8e777\"" Jul 14 21:52:51.202103 containerd[1545]: time="2025-07-14T21:52:51.202065068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ec1e048c50085d78e5459d5547f1e96c3e110985242b2e0301ef5e08c220ea1\"" Jul 14 21:52:51.202607 kubelet[2267]: E0714 21:52:51.202479 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.203413 kubelet[2267]: E0714 21:52:51.203228 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:51.203907 kubelet[2267]: W0714 21:52:51.203855 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:51.203952 kubelet[2267]: E0714 21:52:51.203917 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.64:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:51.204723 containerd[1545]: time="2025-07-14T21:52:51.204680582Z" level=info msg="CreateContainer within sandbox \"96d4d715dcb610fe5ecdcf10077c9494b640e8d99d74c4ea4c9cb75379c8e777\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:52:51.205306 containerd[1545]: time="2025-07-14T21:52:51.205284141Z" level=info msg="CreateContainer within sandbox \"5ec1e048c50085d78e5459d5547f1e96c3e110985242b2e0301ef5e08c220ea1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:52:51.216407 containerd[1545]: time="2025-07-14T21:52:51.216208834Z" level=info msg="CreateContainer within sandbox \"04b3097479e79b01086f9658a4eab253d5da6b4d3ba37113b66f1719846c98ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"030bd0950124ebbae2da09558e361cfb94d3caa131fa8f648b7aa800fdef5016\"" Jul 14 21:52:51.216920 containerd[1545]: time="2025-07-14T21:52:51.216858248Z" level=info msg="StartContainer for \"030bd0950124ebbae2da09558e361cfb94d3caa131fa8f648b7aa800fdef5016\"" Jul 14 21:52:51.222401 containerd[1545]: time="2025-07-14T21:52:51.222331180Z" level=info msg="CreateContainer within sandbox \"96d4d715dcb610fe5ecdcf10077c9494b640e8d99d74c4ea4c9cb75379c8e777\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8f11854ac8f0f4fac5c5bea55def386659d4d48ea98576dfd600578216e3dc75\"" Jul 14 21:52:51.222924 containerd[1545]: time="2025-07-14T21:52:51.222804760Z" level=info msg="StartContainer for \"8f11854ac8f0f4fac5c5bea55def386659d4d48ea98576dfd600578216e3dc75\"" Jul 14 21:52:51.222980 kubelet[2267]: W0714 21:52:51.222843 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:51.222980 kubelet[2267]: E0714 21:52:51.222904 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.64:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:51.229525 containerd[1545]: time="2025-07-14T21:52:51.229480934Z" level=info msg="CreateContainer within sandbox \"5ec1e048c50085d78e5459d5547f1e96c3e110985242b2e0301ef5e08c220ea1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"94146d88efa0f444ab0a281b5df731893320f8589312ee1c1a5d9dc66cd3fabd\"" Jul 14 21:52:51.230104 containerd[1545]: time="2025-07-14T21:52:51.230077583Z" level=info msg="StartContainer for \"94146d88efa0f444ab0a281b5df731893320f8589312ee1c1a5d9dc66cd3fabd\"" Jul 14 21:52:51.283413 containerd[1545]: time="2025-07-14T21:52:51.283267124Z" level=info msg="StartContainer for \"94146d88efa0f444ab0a281b5df731893320f8589312ee1c1a5d9dc66cd3fabd\" returns successfully" Jul 14 21:52:51.310146 kubelet[2267]: W0714 21:52:51.310008 2267 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.64:6443: connect: connection refused Jul 14 21:52:51.310146 kubelet[2267]: E0714 21:52:51.310118 2267 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.64:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.64:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:52:51.313520 containerd[1545]: time="2025-07-14T21:52:51.313350752Z" level=info msg="StartContainer for \"8f11854ac8f0f4fac5c5bea55def386659d4d48ea98576dfd600578216e3dc75\" returns successfully" Jul 14 21:52:51.313520 containerd[1545]: time="2025-07-14T21:52:51.313474579Z" level=info msg="StartContainer for \"030bd0950124ebbae2da09558e361cfb94d3caa131fa8f648b7aa800fdef5016\" returns successfully" Jul 14 21:52:51.407574 kubelet[2267]: E0714 21:52:51.407506 2267 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.64:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.64:6443: connect: connection refused" interval="1.6s" Jul 14 21:52:51.614144 kubelet[2267]: I0714 21:52:51.613649 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:52:51.614144 kubelet[2267]: E0714 21:52:51.613953 2267 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.64:6443/api/v1/nodes\": dial tcp 10.0.0.64:6443: connect: connection refused" node="localhost" Jul 14 21:52:52.030167 kubelet[2267]: E0714 21:52:52.030127 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:52.032041 kubelet[2267]: E0714 21:52:52.031989 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:52.037136 kubelet[2267]: E0714 21:52:52.037112 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:53.014991 kubelet[2267]: E0714 21:52:53.014942 2267 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:52:53.037516 kubelet[2267]: E0714 21:52:53.037484 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:53.215961 kubelet[2267]: I0714 21:52:53.215660 2267 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:52:53.227581 kubelet[2267]: I0714 21:52:53.227377 2267 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:52:53.227581 kubelet[2267]: E0714 21:52:53.227574 2267 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:52:53.234438 kubelet[2267]: E0714 21:52:53.234416 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.335732 kubelet[2267]: E0714 21:52:53.335346 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.435944 kubelet[2267]: E0714 21:52:53.435897 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.536305 kubelet[2267]: E0714 21:52:53.536264 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.636478 kubelet[2267]: E0714 21:52:53.636375 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.737102 kubelet[2267]: E0714 21:52:53.737057 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.837507 kubelet[2267]: E0714 21:52:53.837464 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:53.938132 kubelet[2267]: E0714 21:52:53.938100 2267 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"localhost\" not found" Jul 14 21:52:54.038255 kubelet[2267]: E0714 21:52:54.038219 2267 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:54.041426 kubelet[2267]: E0714 21:52:54.041359 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:54.994114 kubelet[2267]: I0714 21:52:54.994041 2267 apiserver.go:52] "Watching apiserver" Jul 14 21:52:54.999219 kubelet[2267]: I0714 21:52:54.999194 2267 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:52:55.244751 kubelet[2267]: E0714 21:52:55.244638 2267 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:55.277070 systemd[1]: Reloading requested from client PID 2544 ('systemctl') (unit session-7.scope)... Jul 14 21:52:55.277363 systemd[1]: Reloading... Jul 14 21:52:55.335057 zram_generator::config[2586]: No configuration found. Jul 14 21:52:55.419317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:52:55.478254 systemd[1]: Reloading finished in 200 ms. Jul 14 21:52:55.503443 kubelet[2267]: I0714 21:52:55.503339 2267 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:52:55.503545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:55.519683 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:52:55.520036 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:55.529565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:52:55.631048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:52:55.635449 (kubelet)[2635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:52:55.679516 kubelet[2635]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:52:55.681037 kubelet[2635]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 21:52:55.681037 kubelet[2635]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 21:52:55.681037 kubelet[2635]: I0714 21:52:55.679916 2635 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:52:55.690025 kubelet[2635]: I0714 21:52:55.689965 2635 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 21:52:55.690683 kubelet[2635]: I0714 21:52:55.690041 2635 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:52:55.691627 kubelet[2635]: I0714 21:52:55.691606 2635 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 21:52:55.695139 kubelet[2635]: I0714 21:52:55.695119 2635 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 21:52:55.699459 kubelet[2635]: I0714 21:52:55.699432 2635 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:52:55.703659 kubelet[2635]: E0714 21:52:55.703615 2635 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:52:55.703659 kubelet[2635]: I0714 21:52:55.703660 2635 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:52:55.706593 kubelet[2635]: I0714 21:52:55.706569 2635 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 21:52:55.706912 kubelet[2635]: I0714 21:52:55.706897 2635 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 21:52:55.707015 kubelet[2635]: I0714 21:52:55.706988 2635 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:52:55.707230 kubelet[2635]: I0714 21:52:55.707036 2635 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 21:52:55.707230 kubelet[2635]: I0714 21:52:55.707196 2635 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:52:55.707230 kubelet[2635]: I0714 21:52:55.707204 2635 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 21:52:55.707519 kubelet[2635]: I0714 21:52:55.707242 2635 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:55.707519 kubelet[2635]: I0714 21:52:55.707330 2635 kubelet.go:408] "Attempting to sync node with API server" Jul 14 21:52:55.707519 kubelet[2635]: I0714 21:52:55.707340 2635 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:52:55.707519 kubelet[2635]: I0714 21:52:55.707356 2635 kubelet.go:314] "Adding apiserver pod source" Jul 14 21:52:55.707519 kubelet[2635]: I0714 21:52:55.707369 2635 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:52:55.708497 kubelet[2635]: I0714 21:52:55.707956 2635 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 21:52:55.708497 kubelet[2635]: I0714 21:52:55.708401 2635 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:52:55.710350 kubelet[2635]: I0714 21:52:55.710273 2635 server.go:1274] "Started kubelet" Jul 14 21:52:55.710626 kubelet[2635]: I0714 21:52:55.710530 2635 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:52:55.711156 kubelet[2635]: I0714 21:52:55.711009 2635 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:52:55.711780 kubelet[2635]: I0714 21:52:55.711748 2635 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:52:55.712443 kubelet[2635]: I0714 21:52:55.712423 2635 server.go:449] "Adding debug handlers to kubelet server" Jul 14 21:52:55.716698 kubelet[2635]: I0714 21:52:55.716660 2635 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:52:55.716829 kubelet[2635]: I0714 21:52:55.716813 2635 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:52:55.717386 kubelet[2635]: I0714 21:52:55.717359 2635 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 21:52:55.717567 kubelet[2635]: E0714 21:52:55.717549 2635 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:52:55.717693 kubelet[2635]: I0714 21:52:55.717680 2635 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 21:52:55.717884 kubelet[2635]: I0714 21:52:55.717870 2635 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:52:55.726024 kubelet[2635]: I0714 21:52:55.724207 2635 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:52:55.726024 kubelet[2635]: I0714 21:52:55.724323 2635 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:52:55.739540 kubelet[2635]: I0714 21:52:55.739385 2635 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:52:55.740971 kubelet[2635]: E0714 21:52:55.740949 2635 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:52:55.744396 kubelet[2635]: I0714 21:52:55.744316 2635 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:52:55.746981 kubelet[2635]: I0714 21:52:55.746956 2635 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 21:52:55.747100 kubelet[2635]: I0714 21:52:55.746995 2635 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 21:52:55.747100 kubelet[2635]: I0714 21:52:55.747046 2635 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 21:52:55.747100 kubelet[2635]: E0714 21:52:55.747090 2635 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:52:55.789929 kubelet[2635]: I0714 21:52:55.789839 2635 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 21:52:55.789929 kubelet[2635]: I0714 21:52:55.789860 2635 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 21:52:55.789929 kubelet[2635]: I0714 21:52:55.789880 2635 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:52:55.790095 kubelet[2635]: I0714 21:52:55.790028 2635 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:52:55.790095 kubelet[2635]: I0714 21:52:55.790041 2635 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:52:55.790095 kubelet[2635]: I0714 21:52:55.790059 2635 policy_none.go:49] "None policy: Start" Jul 14 21:52:55.791145 kubelet[2635]: I0714 21:52:55.791042 2635 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 21:52:55.791514 kubelet[2635]: I0714 21:52:55.791404 2635 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:52:55.791738 kubelet[2635]: I0714 21:52:55.791717 2635 state_mem.go:75] "Updated machine memory state" Jul 14 21:52:55.792957 kubelet[2635]: I0714 21:52:55.792938 2635 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:52:55.793270 kubelet[2635]: I0714 21:52:55.793242 2635 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:52:55.793375 kubelet[2635]: I0714 21:52:55.793333 2635 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:52:55.793975 kubelet[2635]: I0714 21:52:55.793958 2635 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:52:55.853522 kubelet[2635]: E0714 21:52:55.853485 2635 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.897960 kubelet[2635]: I0714 21:52:55.897937 2635 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 21:52:55.904231 kubelet[2635]: I0714 21:52:55.904197 2635 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 21:52:55.904312 kubelet[2635]: I0714 21:52:55.904268 2635 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 21:52:55.919627 kubelet[2635]: I0714 21:52:55.919583 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.919698 kubelet[2635]: I0714 21:52:55.919653 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.919698 kubelet[2635]: I0714 21:52:55.919682 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.919742 kubelet[2635]: I0714 21:52:55.919705 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:52:55.919787 kubelet[2635]: I0714 21:52:55.919749 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:55.919787 kubelet[2635]: I0714 21:52:55.919766 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.919787 kubelet[2635]: I0714 21:52:55.919781 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:52:55.919850 kubelet[2635]: I0714 21:52:55.919817 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:55.919850 kubelet[2635]: I0714 21:52:55.919835 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8ea7f213f3541e9e38adcf7476a1ac9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b8ea7f213f3541e9e38adcf7476a1ac9\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:56.153195 kubelet[2635]: E0714 21:52:56.153162 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.153634 kubelet[2635]: E0714 21:52:56.153595 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.153811 kubelet[2635]: E0714 21:52:56.153784 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.708062 kubelet[2635]: I0714 21:52:56.708006 2635 apiserver.go:52] "Watching apiserver" Jul 14 21:52:56.718022 kubelet[2635]: I0714 21:52:56.717981 2635 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 21:52:56.768846 kubelet[2635]: E0714 21:52:56.768789 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.769431 kubelet[2635]: E0714 21:52:56.769413 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.774522 kubelet[2635]: E0714 21:52:56.774494 2635 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 21:52:56.774656 kubelet[2635]: E0714 21:52:56.774641 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:56.792320 kubelet[2635]: I0714 21:52:56.792267 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.792250014 podStartE2EDuration="1.792250014s" podCreationTimestamp="2025-07-14 21:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:52:56.78566348 +0000 UTC m=+1.147188676" watchObservedRunningTime="2025-07-14 21:52:56.792250014 +0000 UTC m=+1.153775210" Jul 14 21:52:56.799484 kubelet[2635]: I0714 21:52:56.799411 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.79939463 podStartE2EDuration="1.79939463s" podCreationTimestamp="2025-07-14 21:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:52:56.792231987 +0000 UTC m=+1.153757143" watchObservedRunningTime="2025-07-14 21:52:56.79939463 +0000 UTC m=+1.160919826" Jul 14 21:52:56.799484 kubelet[2635]: I0714 21:52:56.799485 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.799481288 podStartE2EDuration="1.799481288s" podCreationTimestamp="2025-07-14 21:52:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:52:56.799173708 +0000 UTC m=+1.160698904" watchObservedRunningTime="2025-07-14 21:52:56.799481288 +0000 UTC m=+1.161006444" Jul 14 21:52:57.771325 kubelet[2635]: E0714 21:52:57.771247 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:52:59.256144 kubelet[2635]: E0714 21:52:59.256103 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:00.300970 kubelet[2635]: I0714 21:53:00.300929 2635 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Jul 14 21:53:00.301349 containerd[1545]: time="2025-07-14T21:53:00.301306251Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 21:53:00.301688 kubelet[2635]: I0714 21:53:00.301470 2635 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:53:00.354859 kubelet[2635]: I0714 21:53:00.354808 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c008734-9af8-4f34-8d9d-689b7d24bcb8-xtables-lock\") pod \"kube-proxy-qb5wc\" (UID: \"4c008734-9af8-4f34-8d9d-689b7d24bcb8\") " pod="kube-system/kube-proxy-qb5wc" Jul 14 21:53:00.354859 kubelet[2635]: I0714 21:53:00.354852 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9rk\" (UniqueName: \"kubernetes.io/projected/4c008734-9af8-4f34-8d9d-689b7d24bcb8-kube-api-access-kz9rk\") pod \"kube-proxy-qb5wc\" (UID: \"4c008734-9af8-4f34-8d9d-689b7d24bcb8\") " pod="kube-system/kube-proxy-qb5wc" Jul 14 21:53:00.355040 kubelet[2635]: I0714 21:53:00.354878 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4c008734-9af8-4f34-8d9d-689b7d24bcb8-kube-proxy\") pod \"kube-proxy-qb5wc\" (UID: \"4c008734-9af8-4f34-8d9d-689b7d24bcb8\") " pod="kube-system/kube-proxy-qb5wc" Jul 14 21:53:00.355040 kubelet[2635]: I0714 21:53:00.354895 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c008734-9af8-4f34-8d9d-689b7d24bcb8-lib-modules\") pod \"kube-proxy-qb5wc\" (UID: \"4c008734-9af8-4f34-8d9d-689b7d24bcb8\") " pod="kube-system/kube-proxy-qb5wc" Jul 14 21:53:00.463141 kubelet[2635]: E0714 21:53:00.463098 2635 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 14 21:53:00.463141 kubelet[2635]: E0714 21:53:00.463134 2635 projected.go:194] Error preparing data for projected volume kube-api-access-kz9rk for pod kube-system/kube-proxy-qb5wc: configmap "kube-root-ca.crt" not found Jul 14 21:53:00.463292 kubelet[2635]: E0714 21:53:00.463190 2635 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4c008734-9af8-4f34-8d9d-689b7d24bcb8-kube-api-access-kz9rk podName:4c008734-9af8-4f34-8d9d-689b7d24bcb8 nodeName:}" failed. No retries permitted until 2025-07-14 21:53:00.963169863 +0000 UTC m=+5.324695059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kz9rk" (UniqueName: "kubernetes.io/projected/4c008734-9af8-4f34-8d9d-689b7d24bcb8-kube-api-access-kz9rk") pod "kube-proxy-qb5wc" (UID: "4c008734-9af8-4f34-8d9d-689b7d24bcb8") : configmap "kube-root-ca.crt" not found Jul 14 21:53:01.194740 kubelet[2635]: E0714 21:53:01.194693 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:01.195637 containerd[1545]: time="2025-07-14T21:53:01.195278589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qb5wc,Uid:4c008734-9af8-4f34-8d9d-689b7d24bcb8,Namespace:kube-system,Attempt:0,}" Jul 14 21:53:01.214775 containerd[1545]: time="2025-07-14T21:53:01.214572187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:01.214775 containerd[1545]: time="2025-07-14T21:53:01.214619276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:01.214775 containerd[1545]: time="2025-07-14T21:53:01.214630398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:01.214775 containerd[1545]: time="2025-07-14T21:53:01.214717454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:01.244810 containerd[1545]: time="2025-07-14T21:53:01.244764777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qb5wc,Uid:4c008734-9af8-4f34-8d9d-689b7d24bcb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a47546f2f935abfab0dc0e7d02585dd582ac387822bf2ec5e53e6bdb9febf341\"" Jul 14 21:53:01.245491 kubelet[2635]: E0714 21:53:01.245469 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:01.247736 containerd[1545]: time="2025-07-14T21:53:01.247641914Z" level=info msg="CreateContainer within sandbox \"a47546f2f935abfab0dc0e7d02585dd582ac387822bf2ec5e53e6bdb9febf341\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 21:53:01.260839 containerd[1545]: time="2025-07-14T21:53:01.260806809Z" level=info msg="CreateContainer within sandbox \"a47546f2f935abfab0dc0e7d02585dd582ac387822bf2ec5e53e6bdb9febf341\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f11fb8a5f9fe4803f6865c941a86180539696608d57f133d2c141da8f8c8055\"" Jul 14 21:53:01.261505 containerd[1545]: time="2025-07-14T21:53:01.261438647Z" level=info msg="StartContainer for \"3f11fb8a5f9fe4803f6865c941a86180539696608d57f133d2c141da8f8c8055\"" Jul 14 21:53:01.317476 containerd[1545]: time="2025-07-14T21:53:01.316669466Z" level=info msg="StartContainer for \"3f11fb8a5f9fe4803f6865c941a86180539696608d57f133d2c141da8f8c8055\" returns successfully" Jul 14 21:53:01.462849 kubelet[2635]: I0714 21:53:01.462714 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w74ct\" (UniqueName: \"kubernetes.io/projected/de0ec757-ac51-490a-a6d5-6c2f78f83600-kube-api-access-w74ct\") pod \"tigera-operator-5bf8dfcb4-98pfh\" (UID: \"de0ec757-ac51-490a-a6d5-6c2f78f83600\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-98pfh" Jul 14 
21:53:01.462849 kubelet[2635]: I0714 21:53:01.462770 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de0ec757-ac51-490a-a6d5-6c2f78f83600-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-98pfh\" (UID: \"de0ec757-ac51-490a-a6d5-6c2f78f83600\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-98pfh" Jul 14 21:53:01.711289 containerd[1545]: time="2025-07-14T21:53:01.711234486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-98pfh,Uid:de0ec757-ac51-490a-a6d5-6c2f78f83600,Namespace:tigera-operator,Attempt:0,}" Jul 14 21:53:01.729486 containerd[1545]: time="2025-07-14T21:53:01.729278331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:01.729900 containerd[1545]: time="2025-07-14T21:53:01.729861920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:01.729900 containerd[1545]: time="2025-07-14T21:53:01.729886204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:01.730044 containerd[1545]: time="2025-07-14T21:53:01.729973941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:01.774688 containerd[1545]: time="2025-07-14T21:53:01.774650912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-98pfh,Uid:de0ec757-ac51-490a-a6d5-6c2f78f83600,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8dc44be4d80c12848334ce0b2049a0f0098dc9530e943ff2031e33055bcad69d\"" Jul 14 21:53:01.776661 containerd[1545]: time="2025-07-14T21:53:01.776584713Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 21:53:01.778834 kubelet[2635]: E0714 21:53:01.778702 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:02.173748 kubelet[2635]: E0714 21:53:02.173710 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:02.192337 kubelet[2635]: I0714 21:53:02.192248 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qb5wc" podStartSLOduration=2.192229611 podStartE2EDuration="2.192229611s" podCreationTimestamp="2025-07-14 21:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:53:01.789446231 +0000 UTC m=+6.150971467" watchObservedRunningTime="2025-07-14 21:53:02.192229611 +0000 UTC m=+6.553754807" Jul 14 21:53:02.781399 kubelet[2635]: E0714 21:53:02.781353 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:02.931948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2169140637.mount: Deactivated successfully. 
Jul 14 21:53:03.350135 containerd[1545]: time="2025-07-14T21:53:03.350082664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:03.350996 containerd[1545]: time="2025-07-14T21:53:03.350832829Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 14 21:53:03.352035 containerd[1545]: time="2025-07-14T21:53:03.351949575Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:03.354739 containerd[1545]: time="2025-07-14T21:53:03.354663828Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:03.356224 containerd[1545]: time="2025-07-14T21:53:03.355776814Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.579150014s" Jul 14 21:53:03.356224 containerd[1545]: time="2025-07-14T21:53:03.355816141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 14 21:53:03.358919 containerd[1545]: time="2025-07-14T21:53:03.358673017Z" level=info msg="CreateContainer within sandbox \"8dc44be4d80c12848334ce0b2049a0f0098dc9530e943ff2031e33055bcad69d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 21:53:03.368175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1849022949.mount: Deactivated successfully. 
Jul 14 21:53:03.369378 containerd[1545]: time="2025-07-14T21:53:03.369340798Z" level=info msg="CreateContainer within sandbox \"8dc44be4d80c12848334ce0b2049a0f0098dc9530e943ff2031e33055bcad69d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"69719de71c26fd46e0757e0a9a210291d6edfa9325b9abc3eb08cf8301de1b2a\"" Jul 14 21:53:03.369819 containerd[1545]: time="2025-07-14T21:53:03.369793633Z" level=info msg="StartContainer for \"69719de71c26fd46e0757e0a9a210291d6edfa9325b9abc3eb08cf8301de1b2a\"" Jul 14 21:53:03.417042 containerd[1545]: time="2025-07-14T21:53:03.416986389Z" level=info msg="StartContainer for \"69719de71c26fd46e0757e0a9a210291d6edfa9325b9abc3eb08cf8301de1b2a\" returns successfully" Jul 14 21:53:03.727073 kubelet[2635]: E0714 21:53:03.726984 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:03.783985 kubelet[2635]: E0714 21:53:03.783839 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:03.806124 kubelet[2635]: I0714 21:53:03.806066 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-98pfh" podStartSLOduration=1.225222369 podStartE2EDuration="2.806048595s" podCreationTimestamp="2025-07-14 21:53:01 +0000 UTC" firstStartedPulling="2025-07-14 21:53:01.775976799 +0000 UTC m=+6.137501995" lastFinishedPulling="2025-07-14 21:53:03.356803025 +0000 UTC m=+7.718328221" observedRunningTime="2025-07-14 21:53:03.80596282 +0000 UTC m=+8.167488016" watchObservedRunningTime="2025-07-14 21:53:03.806048595 +0000 UTC m=+8.167573791" Jul 14 21:53:08.915584 sudo[1748]: pam_unix(sudo:session): session closed for user root Jul 14 21:53:08.920901 sshd[1741]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:08.928706 systemd[1]: sshd@6-10.0.0.64:22-10.0.0.1:42162.service: Deactivated successfully. Jul 14 21:53:08.936834 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:53:08.937237 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit. Jul 14 21:53:08.944357 systemd-logind[1524]: Removed session 7. Jul 14 21:53:09.274238 kubelet[2635]: E0714 21:53:09.273711 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:11.127101 update_engine[1527]: I20250714 21:53:11.117127 1527 update_attempter.cc:509] Updating boot flags... 
Jul 14 21:53:11.174041 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3046) Jul 14 21:53:11.221074 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3046) Jul 14 21:53:13.846008 kubelet[2635]: I0714 21:53:13.845964 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a10d5d8f-bde3-4b5e-960e-bd0480d14ae2-tigera-ca-bundle\") pod \"calico-typha-7fff8cd749-m77cz\" (UID: \"a10d5d8f-bde3-4b5e-960e-bd0480d14ae2\") " pod="calico-system/calico-typha-7fff8cd749-m77cz" Jul 14 21:53:13.846536 kubelet[2635]: I0714 21:53:13.846444 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5rz\" (UniqueName: \"kubernetes.io/projected/a10d5d8f-bde3-4b5e-960e-bd0480d14ae2-kube-api-access-2k5rz\") pod \"calico-typha-7fff8cd749-m77cz\" (UID: \"a10d5d8f-bde3-4b5e-960e-bd0480d14ae2\") " pod="calico-system/calico-typha-7fff8cd749-m77cz" Jul 14 21:53:13.846536 kubelet[2635]: I0714 21:53:13.846482 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a10d5d8f-bde3-4b5e-960e-bd0480d14ae2-typha-certs\") pod \"calico-typha-7fff8cd749-m77cz\" (UID: \"a10d5d8f-bde3-4b5e-960e-bd0480d14ae2\") " pod="calico-system/calico-typha-7fff8cd749-m77cz" Jul 14 21:53:14.069714 kubelet[2635]: E0714 21:53:14.069681 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:14.071699 containerd[1545]: time="2025-07-14T21:53:14.071646760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fff8cd749-m77cz,Uid:a10d5d8f-bde3-4b5e-960e-bd0480d14ae2,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:14.102921 containerd[1545]: time="2025-07-14T21:53:14.102237492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:14.102921 containerd[1545]: time="2025-07-14T21:53:14.102294658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:14.102921 containerd[1545]: time="2025-07-14T21:53:14.102310059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:14.102921 containerd[1545]: time="2025-07-14T21:53:14.102404548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:14.147958 kubelet[2635]: I0714 21:53:14.147903 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-policysync\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.147958 kubelet[2635]: I0714 21:53:14.147943 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-var-run-calico\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.147958 kubelet[2635]: I0714 21:53:14.147963 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-flexvol-driver-host\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148208 kubelet[2635]: I0714 21:53:14.147986 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-cni-log-dir\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148208 kubelet[2635]: I0714 21:53:14.148002 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52jtv\" (UniqueName: \"kubernetes.io/projected/ff0bdf70-5501-4d8d-9f9d-88089778f258-kube-api-access-52jtv\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148208 kubelet[2635]: I0714 21:53:14.148028 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-lib-modules\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148208 kubelet[2635]: I0714 21:53:14.148044 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-xtables-lock\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148208 kubelet[2635]: I0714 21:53:14.148060 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ff0bdf70-5501-4d8d-9f9d-88089778f258-node-certs\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148323 kubelet[2635]: I0714 21:53:14.148074 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-cni-net-dir\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148323 
kubelet[2635]: I0714 21:53:14.148092 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-var-lib-calico\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148323 kubelet[2635]: I0714 21:53:14.148106 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ff0bdf70-5501-4d8d-9f9d-88089778f258-cni-bin-dir\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.148323 kubelet[2635]: I0714 21:53:14.148120 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff0bdf70-5501-4d8d-9f9d-88089778f258-tigera-ca-bundle\") pod \"calico-node-8ntct\" (UID: \"ff0bdf70-5501-4d8d-9f9d-88089778f258\") " pod="calico-system/calico-node-8ntct" Jul 14 21:53:14.171400 containerd[1545]: time="2025-07-14T21:53:14.171278540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fff8cd749-m77cz,Uid:a10d5d8f-bde3-4b5e-960e-bd0480d14ae2,Namespace:calico-system,Attempt:0,} returns sandbox id \"831bd28b58b20128581f42334410c475afdc5eb9724b43a0237a4dbddb868256\"" Jul 14 21:53:14.172211 kubelet[2635]: E0714 21:53:14.172188 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:14.173386 containerd[1545]: time="2025-07-14T21:53:14.173361017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 21:53:14.253822 kubelet[2635]: E0714 21:53:14.253795 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.254071 kubelet[2635]: W0714 21:53:14.253952 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.254071 kubelet[2635]: E0714 21:53:14.254003 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.254240 kubelet[2635]: E0714 21:53:14.254220 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.254240 kubelet[2635]: W0714 21:53:14.254238 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.254299 kubelet[2635]: E0714 21:53:14.254252 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.259978 kubelet[2635]: E0714 21:53:14.259951 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.259978 kubelet[2635]: W0714 21:53:14.259974 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.259978 kubelet[2635]: E0714 21:53:14.259989 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.346547 kubelet[2635]: E0714 21:53:14.346382 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:14.348186 kubelet[2635]: E0714 21:53:14.348056 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.348186 kubelet[2635]: W0714 21:53:14.348076 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.348186 kubelet[2635]: E0714 21:53:14.348092 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.348942 kubelet[2635]: E0714 21:53:14.348568 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.349116 kubelet[2635]: W0714 21:53:14.349090 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.349160 kubelet[2635]: E0714 21:53:14.349117 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.349334 kubelet[2635]: E0714 21:53:14.349321 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.349334 kubelet[2635]: W0714 21:53:14.349334 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.349401 kubelet[2635]: E0714 21:53:14.349344 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.350979 kubelet[2635]: E0714 21:53:14.350472 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.350979 kubelet[2635]: W0714 21:53:14.350487 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.350979 kubelet[2635]: E0714 21:53:14.350508 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.350979 kubelet[2635]: E0714 21:53:14.350825 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.350979 kubelet[2635]: W0714 21:53:14.350836 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.350979 kubelet[2635]: E0714 21:53:14.350846 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.352536 kubelet[2635]: E0714 21:53:14.352500 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.352536 kubelet[2635]: W0714 21:53:14.352515 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.352536 kubelet[2635]: E0714 21:53:14.352535 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.353442 kubelet[2635]: E0714 21:53:14.353285 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.353442 kubelet[2635]: W0714 21:53:14.353301 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.353442 kubelet[2635]: E0714 21:53:14.353330 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.355892 kubelet[2635]: E0714 21:53:14.355873 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.355892 kubelet[2635]: W0714 21:53:14.355889 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.355991 kubelet[2635]: E0714 21:53:14.355902 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.356429 kubelet[2635]: E0714 21:53:14.356414 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.356429 kubelet[2635]: W0714 21:53:14.356429 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.356506 kubelet[2635]: E0714 21:53:14.356440 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.357759 kubelet[2635]: E0714 21:53:14.356677 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.357759 kubelet[2635]: W0714 21:53:14.356691 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.357759 kubelet[2635]: E0714 21:53:14.356702 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.358259 kubelet[2635]: E0714 21:53:14.357989 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.358714 kubelet[2635]: W0714 21:53:14.358010 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.358773 kubelet[2635]: E0714 21:53:14.358717 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.359483 kubelet[2635]: E0714 21:53:14.359445 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.359483 kubelet[2635]: W0714 21:53:14.359478 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.359483 kubelet[2635]: E0714 21:53:14.359490 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.360000 kubelet[2635]: E0714 21:53:14.359742 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.360000 kubelet[2635]: W0714 21:53:14.359755 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.360000 kubelet[2635]: E0714 21:53:14.359764 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.360600 kubelet[2635]: E0714 21:53:14.360034 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.360600 kubelet[2635]: W0714 21:53:14.360044 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.360600 kubelet[2635]: E0714 21:53:14.360077 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.360600 kubelet[2635]: E0714 21:53:14.360314 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.360600 kubelet[2635]: W0714 21:53:14.360324 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.360600 kubelet[2635]: E0714 21:53:14.360389 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.360772 kubelet[2635]: E0714 21:53:14.360691 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.360772 kubelet[2635]: W0714 21:53:14.360701 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.360772 kubelet[2635]: E0714 21:53:14.360709 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.360879 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.361545 kubelet[2635]: W0714 21:53:14.360891 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.360900 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.361084 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.361545 kubelet[2635]: W0714 21:53:14.361097 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.361111 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.361324 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.361545 kubelet[2635]: W0714 21:53:14.361332 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.361340 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.361545 kubelet[2635]: E0714 21:53:14.361485 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.362802 kubelet[2635]: W0714 21:53:14.361491 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.362802 kubelet[2635]: E0714 21:53:14.361499 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.362802 kubelet[2635]: E0714 21:53:14.361729 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.362802 kubelet[2635]: W0714 21:53:14.361737 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.362802 kubelet[2635]: E0714 21:53:14.361745 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.362802 kubelet[2635]: I0714 21:53:14.361770 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/00f8cdae-e32e-4020-9c5f-9b5051044975-registration-dir\") pod \"csi-node-driver-rxt6z\" (UID: \"00f8cdae-e32e-4020-9c5f-9b5051044975\") " pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:14.362802 kubelet[2635]: E0714 21:53:14.361997 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.362802 kubelet[2635]: W0714 21:53:14.362008 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.362802 kubelet[2635]: E0714 21:53:14.362032 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.363200 kubelet[2635]: I0714 21:53:14.362053 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/00f8cdae-e32e-4020-9c5f-9b5051044975-socket-dir\") pod \"csi-node-driver-rxt6z\" (UID: \"00f8cdae-e32e-4020-9c5f-9b5051044975\") " pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:14.363200 kubelet[2635]: E0714 21:53:14.362221 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.363200 kubelet[2635]: W0714 21:53:14.362229 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.363200 kubelet[2635]: E0714 21:53:14.362243 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.363200 kubelet[2635]: I0714 21:53:14.362259 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlg8\" (UniqueName: \"kubernetes.io/projected/00f8cdae-e32e-4020-9c5f-9b5051044975-kube-api-access-rnlg8\") pod \"csi-node-driver-rxt6z\" (UID: \"00f8cdae-e32e-4020-9c5f-9b5051044975\") " pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:14.363200 kubelet[2635]: E0714 21:53:14.362437 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.363200 kubelet[2635]: W0714 21:53:14.362449 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.363200 kubelet[2635]: E0714 21:53:14.362468 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.363200 kubelet[2635]: E0714 21:53:14.362615 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.363456 kubelet[2635]: W0714 21:53:14.362622 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.363456 kubelet[2635]: E0714 21:53:14.362630 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.363456 kubelet[2635]: E0714 21:53:14.362804 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.363456 kubelet[2635]: W0714 21:53:14.362817 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.363456 kubelet[2635]: E0714 21:53:14.362828 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.363456 kubelet[2635]: E0714 21:53:14.363068 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.363456 kubelet[2635]: W0714 21:53:14.363081 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.363456 kubelet[2635]: E0714 21:53:14.363111 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.363887 kubelet[2635]: E0714 21:53:14.363841 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.363887 kubelet[2635]: W0714 21:53:14.363853 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.363887 kubelet[2635]: E0714 21:53:14.363870 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.363887 kubelet[2635]: I0714 21:53:14.363889 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00f8cdae-e32e-4020-9c5f-9b5051044975-kubelet-dir\") pod \"csi-node-driver-rxt6z\" (UID: \"00f8cdae-e32e-4020-9c5f-9b5051044975\") " pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:14.364118 kubelet[2635]: E0714 21:53:14.364105 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.364211 kubelet[2635]: W0714 21:53:14.364118 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.364211 kubelet[2635]: E0714 21:53:14.364180 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.364266 kubelet[2635]: I0714 21:53:14.364211 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/00f8cdae-e32e-4020-9c5f-9b5051044975-varrun\") pod \"csi-node-driver-rxt6z\" (UID: \"00f8cdae-e32e-4020-9c5f-9b5051044975\") " pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:14.364289 kubelet[2635]: E0714 21:53:14.364273 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.364289 kubelet[2635]: W0714 21:53:14.364282 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.364335 kubelet[2635]: E0714 21:53:14.364324 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.364479 kubelet[2635]: E0714 21:53:14.364464 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.364479 kubelet[2635]: W0714 21:53:14.364476 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.364540 kubelet[2635]: E0714 21:53:14.364491 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.364650 kubelet[2635]: E0714 21:53:14.364637 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.364650 kubelet[2635]: W0714 21:53:14.364649 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.364720 kubelet[2635]: E0714 21:53:14.364671 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.364873 kubelet[2635]: E0714 21:53:14.364860 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.364873 kubelet[2635]: W0714 21:53:14.364872 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.364933 kubelet[2635]: E0714 21:53:14.364882 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.365115 kubelet[2635]: E0714 21:53:14.365102 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.365115 kubelet[2635]: W0714 21:53:14.365113 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.365185 kubelet[2635]: E0714 21:53:14.365121 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.365281 kubelet[2635]: E0714 21:53:14.365268 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.365281 kubelet[2635]: W0714 21:53:14.365279 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.365336 kubelet[2635]: E0714 21:53:14.365286 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.429813 containerd[1545]: time="2025-07-14T21:53:14.429768259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8ntct,Uid:ff0bdf70-5501-4d8d-9f9d-88089778f258,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:14.467224 kubelet[2635]: E0714 21:53:14.467190 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.467224 kubelet[2635]: W0714 21:53:14.467214 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.468149 kubelet[2635]: E0714 21:53:14.467236 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.469893 kubelet[2635]: E0714 21:53:14.469692 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.469893 kubelet[2635]: W0714 21:53:14.469715 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.469893 kubelet[2635]: E0714 21:53:14.469746 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.472953 kubelet[2635]: E0714 21:53:14.472312 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.472953 kubelet[2635]: W0714 21:53:14.472332 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.472953 kubelet[2635]: E0714 21:53:14.472695 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.472953 kubelet[2635]: W0714 21:53:14.472704 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.476046 kubelet[2635]: E0714 21:53:14.472694 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.476046 kubelet[2635]: E0714 21:53:14.472781 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.476193 kubelet[2635]: E0714 21:53:14.476090 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.476193 kubelet[2635]: W0714 21:53:14.476104 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.476193 kubelet[2635]: E0714 21:53:14.476125 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.478689 containerd[1545]: time="2025-07-14T21:53:14.476846790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:14.478689 containerd[1545]: time="2025-07-14T21:53:14.476916036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:14.478689 containerd[1545]: time="2025-07-14T21:53:14.476930958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:14.480138 kubelet[2635]: E0714 21:53:14.478761 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.480138 kubelet[2635]: W0714 21:53:14.478777 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.480138 kubelet[2635]: E0714 21:53:14.478850 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.480982 kubelet[2635]: E0714 21:53:14.480149 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.480982 kubelet[2635]: W0714 21:53:14.480164 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.480982 kubelet[2635]: E0714 21:53:14.480217 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.481083 containerd[1545]: time="2025-07-14T21:53:14.479164809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:14.484046 kubelet[2635]: E0714 21:53:14.482241 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.484046 kubelet[2635]: W0714 21:53:14.482261 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.484046 kubelet[2635]: E0714 21:53:14.482307 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.486115 kubelet[2635]: E0714 21:53:14.484429 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.486115 kubelet[2635]: W0714 21:53:14.484446 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.486115 kubelet[2635]: E0714 21:53:14.484511 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.486115 kubelet[2635]: E0714 21:53:14.484650 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.486115 kubelet[2635]: W0714 21:53:14.484660 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.486115 kubelet[2635]: E0714 21:53:14.486060 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.490063 kubelet[2635]: E0714 21:53:14.487137 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.490063 kubelet[2635]: W0714 21:53:14.487156 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.490063 kubelet[2635]: E0714 21:53:14.487232 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.490063 kubelet[2635]: E0714 21:53:14.487410 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.490063 kubelet[2635]: W0714 21:53:14.487424 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.490063 kubelet[2635]: E0714 21:53:14.487474 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.490063 kubelet[2635]: E0714 21:53:14.487597 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.490063 kubelet[2635]: W0714 21:53:14.487607 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.490063 kubelet[2635]: E0714 21:53:14.487782 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.490063 kubelet[2635]: W0714 21:53:14.487793 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.490359 kubelet[2635]: E0714 21:53:14.489743 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.490359 kubelet[2635]: E0714 21:53:14.489775 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.490359 kubelet[2635]: E0714 21:53:14.489907 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.490359 kubelet[2635]: W0714 21:53:14.489922 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.490359 kubelet[2635]: E0714 21:53:14.489988 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.490359 kubelet[2635]: E0714 21:53:14.490328 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.490359 kubelet[2635]: W0714 21:53:14.490341 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.490500 kubelet[2635]: E0714 21:53:14.490394 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.491134 kubelet[2635]: E0714 21:53:14.491113 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.491134 kubelet[2635]: W0714 21:53:14.491130 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.491211 kubelet[2635]: E0714 21:53:14.491192 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.491910 kubelet[2635]: E0714 21:53:14.491884 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.491910 kubelet[2635]: W0714 21:53:14.491905 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.496036 kubelet[2635]: E0714 21:53:14.493346 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.496259 kubelet[2635]: E0714 21:53:14.496227 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.496259 kubelet[2635]: W0714 21:53:14.496249 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.496320 kubelet[2635]: E0714 21:53:14.496286 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.499153 kubelet[2635]: E0714 21:53:14.499115 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.499153 kubelet[2635]: W0714 21:53:14.499144 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.501169 kubelet[2635]: E0714 21:53:14.499305 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.501169 kubelet[2635]: E0714 21:53:14.499497 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.501169 kubelet[2635]: W0714 21:53:14.499513 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.501169 kubelet[2635]: E0714 21:53:14.499595 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.502095 kubelet[2635]: E0714 21:53:14.502076 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.502095 kubelet[2635]: W0714 21:53:14.502094 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.502249 kubelet[2635]: E0714 21:53:14.502208 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.502379 kubelet[2635]: E0714 21:53:14.502365 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.502379 kubelet[2635]: W0714 21:53:14.502377 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.502567 kubelet[2635]: E0714 21:53:14.502463 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.503007 kubelet[2635]: E0714 21:53:14.502978 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.503007 kubelet[2635]: W0714 21:53:14.502994 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.503086 kubelet[2635]: E0714 21:53:14.503010 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.504127 kubelet[2635]: E0714 21:53:14.504101 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.504127 kubelet[2635]: W0714 21:53:14.504121 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.504225 kubelet[2635]: E0714 21:53:14.504136 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:14.515073 kubelet[2635]: E0714 21:53:14.515037 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:14.515073 kubelet[2635]: W0714 21:53:14.515060 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:14.515073 kubelet[2635]: E0714 21:53:14.515083 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:14.538603 containerd[1545]: time="2025-07-14T21:53:14.538494058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8ntct,Uid:ff0bdf70-5501-4d8d-9f9d-88089778f258,Namespace:calico-system,Attempt:0,} returns sandbox id \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\"" Jul 14 21:53:15.748062 kubelet[2635]: E0714 21:53:15.747844 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:17.747878 kubelet[2635]: E0714 21:53:17.747788 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:19.747884 kubelet[2635]: E0714 21:53:19.747647 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:20.306120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751782930.mount: Deactivated successfully. Jul 14 21:53:20.616967 containerd[1545]: time="2025-07-14T21:53:20.616838855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:20.617909 containerd[1545]: time="2025-07-14T21:53:20.617665435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 14 21:53:20.618552 containerd[1545]: time="2025-07-14T21:53:20.618520536Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:20.621422 containerd[1545]: time="2025-07-14T21:53:20.621386103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:20.622935 containerd[1545]: time="2025-07-14T21:53:20.622572068Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 6.449172968s" Jul 14 21:53:20.622935 containerd[1545]: time="2025-07-14T21:53:20.622614271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 14 21:53:20.623745 containerd[1545]: time="2025-07-14T21:53:20.623711470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 21:53:20.636115 containerd[1545]: time="2025-07-14T21:53:20.636077682Z" level=info msg="CreateContainer within 
sandbox \"831bd28b58b20128581f42334410c475afdc5eb9724b43a0237a4dbddb868256\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 21:53:20.664452 containerd[1545]: time="2025-07-14T21:53:20.664402883Z" level=info msg="CreateContainer within sandbox \"831bd28b58b20128581f42334410c475afdc5eb9724b43a0237a4dbddb868256\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f1ff6f8da9a10828ee247a790b67303bd3f081a08115c349ef3162ca97fd0421\"" Jul 14 21:53:20.666119 containerd[1545]: time="2025-07-14T21:53:20.665055170Z" level=info msg="StartContainer for \"f1ff6f8da9a10828ee247a790b67303bd3f081a08115c349ef3162ca97fd0421\"" Jul 14 21:53:20.725030 containerd[1545]: time="2025-07-14T21:53:20.724967369Z" level=info msg="StartContainer for \"f1ff6f8da9a10828ee247a790b67303bd3f081a08115c349ef3162ca97fd0421\" returns successfully" Jul 14 21:53:20.834243 kubelet[2635]: E0714 21:53:20.834170 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:20.907172 kubelet[2635]: E0714 21:53:20.907140 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:20.907518 kubelet[2635]: W0714 21:53:20.907329 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:20.907518 kubelet[2635]: E0714 21:53:20.907357 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:20.910150 kubelet[2635]: E0714 21:53:20.910124 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:20.910359 kubelet[2635]: W0714 21:53:20.910277 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:20.910359 kubelet[2635]: E0714 21:53:20.910303 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:20.912198 kubelet[2635]: E0714 21:53:20.912166 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:20.914170 kubelet[2635]: W0714 21:53:20.914067 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:20.914170 kubelet[2635]: E0714 21:53:20.914102 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:53:20.914637 kubelet[2635]: E0714 21:53:20.914545 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:20.914637 kubelet[2635]: W0714 21:53:20.914560 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:20.914637 kubelet[2635]: E0714 21:53:20.914573 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [… the same driver-call.go:262 / driver-call.go:149 / plugins.go:691 failure messages repeat with only timestamps differing (21:53:20.918157 through 21:53:20.941789); duplicate entries elided …] Jul 14 21:53:20.942575 kubelet[2635]: E0714 21:53:20.941954 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:20.942575 kubelet[2635]: W0714 21:53:20.941962 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:20.942575 kubelet[2635]: E0714 21:53:20.941970 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 14 21:53:20.943329 kubelet[2635]: E0714 21:53:20.943303 2635 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:53:20.943329 kubelet[2635]: W0714 21:53:20.943318 2635 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:53:20.943329 kubelet[2635]: E0714 21:53:20.943331 2635 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:53:21.585636 containerd[1545]: time="2025-07-14T21:53:21.585591625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:21.586986 containerd[1545]: time="2025-07-14T21:53:21.586570573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 14 21:53:21.587947 containerd[1545]: time="2025-07-14T21:53:21.587914186Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:21.590347 containerd[1545]: time="2025-07-14T21:53:21.590317432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:21.591693 containerd[1545]: time="2025-07-14T21:53:21.591569438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 967.820566ms" Jul 14 21:53:21.591693 containerd[1545]: time="2025-07-14T21:53:21.591607321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 14 21:53:21.593737 containerd[1545]: time="2025-07-14T21:53:21.593699106Z" level=info msg="CreateContainer within sandbox \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 21:53:21.605367 containerd[1545]: time="2025-07-14T21:53:21.605219062Z" level=info msg="CreateContainer within sandbox \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"011c55749ff0e6781a8c25cc38af6a8d2cb153c868e33c262a6d8eaff05c0e0d\"" Jul 14 21:53:21.606115 containerd[1545]: time="2025-07-14T21:53:21.606084082Z" level=info msg="StartContainer for \"011c55749ff0e6781a8c25cc38af6a8d2cb153c868e33c262a6d8eaff05c0e0d\"" Jul 14 21:53:21.653886 containerd[1545]: time="2025-07-14T21:53:21.653844702Z" level=info msg="StartContainer for \"011c55749ff0e6781a8c25cc38af6a8d2cb153c868e33c262a6d8eaff05c0e0d\" returns successfully" Jul 14 21:53:21.721683 containerd[1545]: time="2025-07-14T21:53:21.717502862Z" level=info msg="shim disconnected" 
id=011c55749ff0e6781a8c25cc38af6a8d2cb153c868e33c262a6d8eaff05c0e0d namespace=k8s.io Jul 14 21:53:21.721683 containerd[1545]: time="2025-07-14T21:53:21.721457975Z" level=warning msg="cleaning up after shim disconnected" id=011c55749ff0e6781a8c25cc38af6a8d2cb153c868e33c262a6d8eaff05c0e0d namespace=k8s.io Jul 14 21:53:21.721683 containerd[1545]: time="2025-07-14T21:53:21.721472336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:53:21.748168 kubelet[2635]: E0714 21:53:21.747916 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:21.837127 kubelet[2635]: I0714 21:53:21.836997 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:53:21.838478 kubelet[2635]: E0714 21:53:21.837339 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:21.839885 containerd[1545]: time="2025-07-14T21:53:21.839542256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 21:53:21.851811 kubelet[2635]: I0714 21:53:21.851224 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7fff8cd749-m77cz" podStartSLOduration=2.400759351 podStartE2EDuration="8.851207662s" podCreationTimestamp="2025-07-14 21:53:13 +0000 UTC" firstStartedPulling="2025-07-14 21:53:14.172888772 +0000 UTC m=+18.534413968" lastFinishedPulling="2025-07-14 21:53:20.623337083 +0000 UTC m=+24.984862279" observedRunningTime="2025-07-14 21:53:20.856532932 +0000 UTC m=+25.218058128" watchObservedRunningTime="2025-07-14 21:53:21.851207662 +0000 UTC m=+26.212732818" Jul 14 21:53:22.284490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-011c55749ff0e6781a8c25cc38af6a8d2cb153c868e33c262a6d8eaff05c0e0d-rootfs.mount: Deactivated successfully. 
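The driver-call failures above have a mechanical explanation: kubelet discovers FlexVolume drivers by exec'ing each binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument "init" and decoding its stdout as JSON. The nodeagent~uds/uds binary does not exist until Calico's flexvol-driver init container (011c5574…, started above; the subsequent "shim disconnected" cleanup is that short-lived container exiting, which is expected for an init container) installs it, so the exec fails and decoding the empty output produces Go's "unexpected end of JSON input". Below is a minimal sketch of that failure mode, not kubelet's actual driver-call code; the DriverStatus type is an assumed stand-in for the real response schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is an illustrative stand-in for the JSON status a
// FlexVolume driver prints on stdout; it is not kubelet's real type.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// Until the flexvol-driver init container installs this binary, the
	// exec fails (kubelet reports it as "executable file not found in
	// $PATH") and stdout stays empty.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
	}

	// Decoding the empty output is what yields the paired error in the
	// log: encoding/json returns "unexpected end of JSON input".
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("failed to unmarshal output for command init: %v\n", err)
	}
}

Once the driver binary is in place the same call prints a JSON status and the probing succeeds; consistent with that, this section shows no further driver-call failures after 21:53:20.943331.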
Jul 14 21:53:23.594451 containerd[1545]: time="2025-07-14T21:53:23.594403509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:23.595047 containerd[1545]: time="2025-07-14T21:53:23.594987386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 14 21:53:23.595495 containerd[1545]: time="2025-07-14T21:53:23.595448215Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:23.597422 containerd[1545]: time="2025-07-14T21:53:23.597391259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:23.598438 containerd[1545]: time="2025-07-14T21:53:23.598401003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 1.758816865s" Jul 14 21:53:23.598480 containerd[1545]: time="2025-07-14T21:53:23.598440486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 14 21:53:23.601212 containerd[1545]: time="2025-07-14T21:53:23.601163459Z" level=info msg="CreateContainer within sandbox \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 21:53:23.615046 containerd[1545]: time="2025-07-14T21:53:23.614965659Z" level=info msg="CreateContainer within sandbox \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"14da28ee934330138c24d245a051aea70ae1b800e16ec761016728a4ecbd6404\"" Jul 14 21:53:23.615609 containerd[1545]: time="2025-07-14T21:53:23.615554856Z" level=info msg="StartContainer for \"14da28ee934330138c24d245a051aea70ae1b800e16ec761016728a4ecbd6404\"" Jul 14 21:53:23.670808 containerd[1545]: time="2025-07-14T21:53:23.670767974Z" level=info msg="StartContainer for \"14da28ee934330138c24d245a051aea70ae1b800e16ec761016728a4ecbd6404\" returns successfully" Jul 14 21:53:23.748476 kubelet[2635]: E0714 21:53:23.748153 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:24.287196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14da28ee934330138c24d245a051aea70ae1b800e16ec761016728a4ecbd6404-rootfs.mount: Deactivated successfully. 
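The install-cni init container (14da28ee…) exits the same way after dropping the Calico CNI plugin and its network config onto the host, which is what eventually clears the recurring "cni plugin not initialized" condition. A minimal readiness sketch follows, assuming containerd's default CNI paths (/etc/cni/net.d for config, /opt/cni/bin for binaries) and Calico's .conflist-style config file; the checks and messages are illustrative, not output of any component above.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// containerd's default CRI config directory; Calico's install-cni
	// writes a .conflist here (the exact file name varies by version).
	confs, _ := filepath.Glob("/etc/cni/net.d/*conflist")
	if len(confs) == 0 {
		fmt.Println("no CNI network config yet: runtime stays NetworkReady=false")
	} else {
		fmt.Println("CNI config installed:", confs)
	}

	// The Calico CNI plugin also needs per-node state written by the
	// calico/node container; its absence is exactly what the sandbox
	// errors below surface via stat(2).
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		fmt.Println("calico/node state missing:", err)
	} else {
		fmt.Println("/var/lib/calico/nodename present")
	}
}

Run on this node at this point in the log, both checks would still fail: calico-node itself is only being pulled next (ghcr.io/flatcar/calico/node:v3.30.2), which matches the string of "stat /var/lib/calico/nodename: no such file or directory" sandbox failures that follow.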
Jul 14 21:53:24.293011 kubelet[2635]: I0714 21:53:24.292984 2635 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 21:53:24.297147 containerd[1545]: time="2025-07-14T21:53:24.296997711Z" level=info msg="shim disconnected" id=14da28ee934330138c24d245a051aea70ae1b800e16ec761016728a4ecbd6404 namespace=k8s.io Jul 14 21:53:24.297147 containerd[1545]: time="2025-07-14T21:53:24.297140239Z" level=warning msg="cleaning up after shim disconnected" id=14da28ee934330138c24d245a051aea70ae1b800e16ec761016728a4ecbd6404 namespace=k8s.io Jul 14 21:53:24.297309 containerd[1545]: time="2025-07-14T21:53:24.297150360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:53:24.364483 kubelet[2635]: I0714 21:53:24.364253 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26gv2\" (UniqueName: \"kubernetes.io/projected/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-kube-api-access-26gv2\") pod \"whisker-6dc9cb6569-s2rx5\" (UID: \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\") " pod="calico-system/whisker-6dc9cb6569-s2rx5" Jul 14 21:53:24.364483 kubelet[2635]: I0714 21:53:24.364299 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkv47\" (UniqueName: \"kubernetes.io/projected/ad88b19a-39ed-43ef-8d34-f24a9a9dd91a-kube-api-access-bkv47\") pod \"calico-apiserver-556f958c76-grqpf\" (UID: \"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a\") " pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" Jul 14 21:53:24.364483 kubelet[2635]: I0714 21:53:24.364316 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3a3148e-78bd-4afa-a1ab-e95fcbbdb088-tigera-ca-bundle\") pod \"calico-kube-controllers-58cbd4c654-nf4d4\" (UID: \"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088\") " pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" Jul 14 21:53:24.364483 kubelet[2635]: I0714 21:53:24.364341 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-backend-key-pair\") pod \"whisker-6dc9cb6569-s2rx5\" (UID: \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\") " pod="calico-system/whisker-6dc9cb6569-s2rx5" Jul 14 21:53:24.364483 kubelet[2635]: I0714 21:53:24.364363 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-ca-bundle\") pod \"whisker-6dc9cb6569-s2rx5\" (UID: \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\") " pod="calico-system/whisker-6dc9cb6569-s2rx5" Jul 14 21:53:24.364713 kubelet[2635]: I0714 21:53:24.364379 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad88b19a-39ed-43ef-8d34-f24a9a9dd91a-calico-apiserver-certs\") pod \"calico-apiserver-556f958c76-grqpf\" (UID: \"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a\") " pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" Jul 14 21:53:24.364713 kubelet[2635]: I0714 21:53:24.364401 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv8r\" (UniqueName: \"kubernetes.io/projected/a3a3148e-78bd-4afa-a1ab-e95fcbbdb088-kube-api-access-zxv8r\") pod 
\"calico-kube-controllers-58cbd4c654-nf4d4\" (UID: \"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088\") " pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" Jul 14 21:53:24.464673 kubelet[2635]: I0714 21:53:24.464630 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gm24\" (UniqueName: \"kubernetes.io/projected/77d9f340-8b20-4c7c-bc84-1d529d731237-kube-api-access-9gm24\") pod \"calico-apiserver-556f958c76-mmvvm\" (UID: \"77d9f340-8b20-4c7c-bc84-1d529d731237\") " pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" Jul 14 21:53:24.464673 kubelet[2635]: I0714 21:53:24.464680 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83ff6d-3a8d-4195-8362-ba2ec00150cb-config\") pod \"goldmane-58fd7646b9-vz2ck\" (UID: \"5f83ff6d-3a8d-4195-8362-ba2ec00150cb\") " pod="calico-system/goldmane-58fd7646b9-vz2ck" Jul 14 21:53:24.464959 kubelet[2635]: I0714 21:53:24.464732 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9sls\" (UniqueName: \"kubernetes.io/projected/24e0aa04-85b8-423b-8338-45073fa49cb5-kube-api-access-f9sls\") pod \"coredns-7c65d6cfc9-rvt7n\" (UID: \"24e0aa04-85b8-423b-8338-45073fa49cb5\") " pod="kube-system/coredns-7c65d6cfc9-rvt7n" Jul 14 21:53:24.464959 kubelet[2635]: I0714 21:53:24.464753 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfdl\" (UniqueName: \"kubernetes.io/projected/26306074-76d4-4748-a961-4f9fbf0ca63f-kube-api-access-qrfdl\") pod \"coredns-7c65d6cfc9-nqcm7\" (UID: \"26306074-76d4-4748-a961-4f9fbf0ca63f\") " pod="kube-system/coredns-7c65d6cfc9-nqcm7" Jul 14 21:53:24.465488 kubelet[2635]: I0714 21:53:24.465121 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f83ff6d-3a8d-4195-8362-ba2ec00150cb-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-vz2ck\" (UID: \"5f83ff6d-3a8d-4195-8362-ba2ec00150cb\") " pod="calico-system/goldmane-58fd7646b9-vz2ck" Jul 14 21:53:24.465488 kubelet[2635]: I0714 21:53:24.465178 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5f83ff6d-3a8d-4195-8362-ba2ec00150cb-goldmane-key-pair\") pod \"goldmane-58fd7646b9-vz2ck\" (UID: \"5f83ff6d-3a8d-4195-8362-ba2ec00150cb\") " pod="calico-system/goldmane-58fd7646b9-vz2ck" Jul 14 21:53:24.465488 kubelet[2635]: I0714 21:53:24.465218 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77d9f340-8b20-4c7c-bc84-1d529d731237-calico-apiserver-certs\") pod \"calico-apiserver-556f958c76-mmvvm\" (UID: \"77d9f340-8b20-4c7c-bc84-1d529d731237\") " pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" Jul 14 21:53:24.465488 kubelet[2635]: I0714 21:53:24.465238 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24e0aa04-85b8-423b-8338-45073fa49cb5-config-volume\") pod \"coredns-7c65d6cfc9-rvt7n\" (UID: \"24e0aa04-85b8-423b-8338-45073fa49cb5\") " pod="kube-system/coredns-7c65d6cfc9-rvt7n" Jul 14 21:53:24.465488 kubelet[2635]: I0714 21:53:24.465269 2635 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26306074-76d4-4748-a961-4f9fbf0ca63f-config-volume\") pod \"coredns-7c65d6cfc9-nqcm7\" (UID: \"26306074-76d4-4748-a961-4f9fbf0ca63f\") " pod="kube-system/coredns-7c65d6cfc9-nqcm7" Jul 14 21:53:24.465634 kubelet[2635]: I0714 21:53:24.465285 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvzfj\" (UniqueName: \"kubernetes.io/projected/5f83ff6d-3a8d-4195-8362-ba2ec00150cb-kube-api-access-bvzfj\") pod \"goldmane-58fd7646b9-vz2ck\" (UID: \"5f83ff6d-3a8d-4195-8362-ba2ec00150cb\") " pod="calico-system/goldmane-58fd7646b9-vz2ck" Jul 14 21:53:24.635574 kubelet[2635]: E0714 21:53:24.634951 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:24.635712 containerd[1545]: time="2025-07-14T21:53:24.635328520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nqcm7,Uid:26306074-76d4-4748-a961-4f9fbf0ca63f,Namespace:kube-system,Attempt:0,}" Jul 14 21:53:24.635982 containerd[1545]: time="2025-07-14T21:53:24.635346281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cbd4c654-nf4d4,Uid:a3a3148e-78bd-4afa-a1ab-e95fcbbdb088,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:24.648035 kubelet[2635]: E0714 21:53:24.646676 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:24.648129 containerd[1545]: time="2025-07-14T21:53:24.647694998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rvt7n,Uid:24e0aa04-85b8-423b-8338-45073fa49cb5,Namespace:kube-system,Attempt:0,}" Jul 14 21:53:24.650885 containerd[1545]: time="2025-07-14T21:53:24.649853690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-mmvvm,Uid:77d9f340-8b20-4c7c-bc84-1d529d731237,Namespace:calico-apiserver,Attempt:0,}" Jul 14 21:53:24.652085 containerd[1545]: time="2025-07-14T21:53:24.651111167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vz2ck,Uid:5f83ff6d-3a8d-4195-8362-ba2ec00150cb,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:24.652484 containerd[1545]: time="2025-07-14T21:53:24.652455449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-grqpf,Uid:ad88b19a-39ed-43ef-8d34-f24a9a9dd91a,Namespace:calico-apiserver,Attempt:0,}" Jul 14 21:53:24.655194 containerd[1545]: time="2025-07-14T21:53:24.655167415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc9cb6569-s2rx5,Uid:562abfa1-8d7b-4b3d-8c15-9e7f5730819d,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:24.854308 containerd[1545]: time="2025-07-14T21:53:24.854101084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 21:53:25.113218 containerd[1545]: time="2025-07-14T21:53:25.113165786Z" level=error msg="Failed to destroy network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.113606 containerd[1545]: time="2025-07-14T21:53:25.113575210Z" level=error 
msg="encountered an error cleaning up failed sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.113652 containerd[1545]: time="2025-07-14T21:53:25.113624773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rvt7n,Uid:24e0aa04-85b8-423b-8338-45073fa49cb5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.116430 kubelet[2635]: E0714 21:53:25.116367 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.123059 kubelet[2635]: E0714 21:53:25.122886 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rvt7n" Jul 14 21:53:25.123059 kubelet[2635]: E0714 21:53:25.122955 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-rvt7n" Jul 14 21:53:25.123059 kubelet[2635]: E0714 21:53:25.123048 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rvt7n_kube-system(24e0aa04-85b8-423b-8338-45073fa49cb5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rvt7n_kube-system(24e0aa04-85b8-423b-8338-45073fa49cb5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rvt7n" podUID="24e0aa04-85b8-423b-8338-45073fa49cb5" Jul 14 21:53:25.129208 containerd[1545]: time="2025-07-14T21:53:25.129147528Z" level=error msg="Failed to destroy network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.129525 containerd[1545]: 
time="2025-07-14T21:53:25.129495789Z" level=error msg="encountered an error cleaning up failed sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.129584 containerd[1545]: time="2025-07-14T21:53:25.129548752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vz2ck,Uid:5f83ff6d-3a8d-4195-8362-ba2ec00150cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.129783 kubelet[2635]: E0714 21:53:25.129737 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.129843 kubelet[2635]: E0714 21:53:25.129807 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-vz2ck" Jul 14 21:53:25.129843 kubelet[2635]: E0714 21:53:25.129827 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-vz2ck" Jul 14 21:53:25.129914 kubelet[2635]: E0714 21:53:25.129863 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-vz2ck_calico-system(5f83ff6d-3a8d-4195-8362-ba2ec00150cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-vz2ck_calico-system(5f83ff6d-3a8d-4195-8362-ba2ec00150cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vz2ck" podUID="5f83ff6d-3a8d-4195-8362-ba2ec00150cb" Jul 14 21:53:25.133025 containerd[1545]: time="2025-07-14T21:53:25.132959633Z" level=error msg="Failed to destroy network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 14 21:53:25.133267 containerd[1545]: time="2025-07-14T21:53:25.133219168Z" level=error msg="Failed to destroy network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.133574 containerd[1545]: time="2025-07-14T21:53:25.133539027Z" level=error msg="encountered an error cleaning up failed sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.133630 containerd[1545]: time="2025-07-14T21:53:25.133587030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-grqpf,Uid:ad88b19a-39ed-43ef-8d34-f24a9a9dd91a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.133849 kubelet[2635]: E0714 21:53:25.133778 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.133914 kubelet[2635]: E0714 21:53:25.133858 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" Jul 14 21:53:25.133914 kubelet[2635]: E0714 21:53:25.133877 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" Jul 14 21:53:25.133966 kubelet[2635]: E0714 21:53:25.133916 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556f958c76-grqpf_calico-apiserver(ad88b19a-39ed-43ef-8d34-f24a9a9dd91a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556f958c76-grqpf_calico-apiserver(ad88b19a-39ed-43ef-8d34-f24a9a9dd91a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" podUID="ad88b19a-39ed-43ef-8d34-f24a9a9dd91a" Jul 14 21:53:25.135126 containerd[1545]: time="2025-07-14T21:53:25.135082438Z" level=error msg="encountered an error cleaning up failed sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.135225 containerd[1545]: time="2025-07-14T21:53:25.135132401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dc9cb6569-s2rx5,Uid:562abfa1-8d7b-4b3d-8c15-9e7f5730819d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.135322 kubelet[2635]: E0714 21:53:25.135287 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.135381 kubelet[2635]: E0714 21:53:25.135329 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dc9cb6569-s2rx5" Jul 14 21:53:25.135381 kubelet[2635]: E0714 21:53:25.135347 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dc9cb6569-s2rx5" Jul 14 21:53:25.135506 kubelet[2635]: E0714 21:53:25.135374 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dc9cb6569-s2rx5_calico-system(562abfa1-8d7b-4b3d-8c15-9e7f5730819d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dc9cb6569-s2rx5_calico-system(562abfa1-8d7b-4b3d-8c15-9e7f5730819d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dc9cb6569-s2rx5" podUID="562abfa1-8d7b-4b3d-8c15-9e7f5730819d" Jul 14 21:53:25.139931 containerd[1545]: time="2025-07-14T21:53:25.139808957Z" level=error msg="Failed to destroy network for sandbox 
\"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.140276 containerd[1545]: time="2025-07-14T21:53:25.140245503Z" level=error msg="encountered an error cleaning up failed sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.140419 containerd[1545]: time="2025-07-14T21:53:25.140397071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cbd4c654-nf4d4,Uid:a3a3148e-78bd-4afa-a1ab-e95fcbbdb088,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.140868 kubelet[2635]: E0714 21:53:25.140759 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.140868 kubelet[2635]: E0714 21:53:25.140805 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" Jul 14 21:53:25.140868 kubelet[2635]: E0714 21:53:25.140821 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" Jul 14 21:53:25.141023 kubelet[2635]: E0714 21:53:25.140849 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58cbd4c654-nf4d4_calico-system(a3a3148e-78bd-4afa-a1ab-e95fcbbdb088)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58cbd4c654-nf4d4_calico-system(a3a3148e-78bd-4afa-a1ab-e95fcbbdb088)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" 
podUID="a3a3148e-78bd-4afa-a1ab-e95fcbbdb088" Jul 14 21:53:25.141410 containerd[1545]: time="2025-07-14T21:53:25.141372329Z" level=error msg="Failed to destroy network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.141840 containerd[1545]: time="2025-07-14T21:53:25.141808475Z" level=error msg="encountered an error cleaning up failed sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.141998 containerd[1545]: time="2025-07-14T21:53:25.141856718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-mmvvm,Uid:77d9f340-8b20-4c7c-bc84-1d529d731237,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.142078 kubelet[2635]: E0714 21:53:25.141986 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.142132 kubelet[2635]: E0714 21:53:25.142068 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" Jul 14 21:53:25.142132 kubelet[2635]: E0714 21:53:25.142090 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" Jul 14 21:53:25.142210 kubelet[2635]: E0714 21:53:25.142125 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556f958c76-mmvvm_calico-apiserver(77d9f340-8b20-4c7c-bc84-1d529d731237)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556f958c76-mmvvm_calico-apiserver(77d9f340-8b20-4c7c-bc84-1d529d731237)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" podUID="77d9f340-8b20-4c7c-bc84-1d529d731237" Jul 14 21:53:25.149035 containerd[1545]: time="2025-07-14T21:53:25.148977978Z" level=error msg="Failed to destroy network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.149348 containerd[1545]: time="2025-07-14T21:53:25.149308717Z" level=error msg="encountered an error cleaning up failed sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.149391 containerd[1545]: time="2025-07-14T21:53:25.149355960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nqcm7,Uid:26306074-76d4-4748-a961-4f9fbf0ca63f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.149551 kubelet[2635]: E0714 21:53:25.149516 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.149598 kubelet[2635]: E0714 21:53:25.149556 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nqcm7" Jul 14 21:53:25.149598 kubelet[2635]: E0714 21:53:25.149573 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nqcm7" Jul 14 21:53:25.149650 kubelet[2635]: E0714 21:53:25.149607 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nqcm7_kube-system(26306074-76d4-4748-a961-4f9fbf0ca63f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nqcm7_kube-system(26306074-76d4-4748-a961-4f9fbf0ca63f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nqcm7" podUID="26306074-76d4-4748-a961-4f9fbf0ca63f" Jul 14 21:53:25.612327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad-shm.mount: Deactivated successfully. Jul 14 21:53:25.612477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89-shm.mount: Deactivated successfully. Jul 14 21:53:25.752260 containerd[1545]: time="2025-07-14T21:53:25.752087066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxt6z,Uid:00f8cdae-e32e-4020-9c5f-9b5051044975,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:25.836624 containerd[1545]: time="2025-07-14T21:53:25.836445161Z" level=error msg="Failed to destroy network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.838742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50-shm.mount: Deactivated successfully. Jul 14 21:53:25.840250 containerd[1545]: time="2025-07-14T21:53:25.839392215Z" level=error msg="encountered an error cleaning up failed sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.840250 containerd[1545]: time="2025-07-14T21:53:25.839439978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxt6z,Uid:00f8cdae-e32e-4020-9c5f-9b5051044975,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.840588 kubelet[2635]: E0714 21:53:25.840549 2635 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.841057 kubelet[2635]: E0714 21:53:25.840726 2635 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:25.841057 kubelet[2635]: E0714 21:53:25.840750 2635 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rxt6z" Jul 14 21:53:25.841057 kubelet[2635]: E0714 21:53:25.840791 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rxt6z_calico-system(00f8cdae-e32e-4020-9c5f-9b5051044975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rxt6z_calico-system(00f8cdae-e32e-4020-9c5f-9b5051044975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:25.856587 kubelet[2635]: I0714 21:53:25.856345 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:25.859051 containerd[1545]: time="2025-07-14T21:53:25.858679952Z" level=info msg="StopPodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\"" Jul 14 21:53:25.861176 kubelet[2635]: I0714 21:53:25.861150 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:25.862540 containerd[1545]: time="2025-07-14T21:53:25.862134596Z" level=info msg="StopPodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\"" Jul 14 21:53:25.862540 containerd[1545]: time="2025-07-14T21:53:25.862304446Z" level=info msg="Ensure that sandbox 0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8 in task-service has been cleanup successfully" Jul 14 21:53:25.862652 kubelet[2635]: I0714 21:53:25.862325 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:25.863048 containerd[1545]: time="2025-07-14T21:53:25.862991967Z" level=info msg="StopPodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\"" Jul 14 21:53:25.863371 containerd[1545]: time="2025-07-14T21:53:25.863285024Z" level=info msg="Ensure that sandbox de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50 in task-service has been cleanup successfully" Jul 14 21:53:25.865315 containerd[1545]: time="2025-07-14T21:53:25.865271101Z" level=info msg="Ensure that sandbox cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf in task-service has been cleanup successfully" Jul 14 21:53:25.865692 kubelet[2635]: I0714 21:53:25.865651 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:25.866618 containerd[1545]: time="2025-07-14T21:53:25.866598939Z" level=info msg="StopPodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\"" Jul 14 21:53:25.867571 kubelet[2635]: I0714 21:53:25.867538 2635 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:25.867829 containerd[1545]: time="2025-07-14T21:53:25.867452550Z" level=info msg="Ensure that sandbox 97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a in task-service has been cleanup successfully" Jul 14 21:53:25.868384 containerd[1545]: time="2025-07-14T21:53:25.868093107Z" level=info msg="StopPodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\"" Jul 14 21:53:25.868384 containerd[1545]: time="2025-07-14T21:53:25.868208154Z" level=info msg="Ensure that sandbox d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad in task-service has been cleanup successfully" Jul 14 21:53:25.870355 kubelet[2635]: I0714 21:53:25.870312 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:25.872042 containerd[1545]: time="2025-07-14T21:53:25.871859730Z" level=info msg="StopPodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\"" Jul 14 21:53:25.872604 containerd[1545]: time="2025-07-14T21:53:25.872558731Z" level=info msg="Ensure that sandbox b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d in task-service has been cleanup successfully" Jul 14 21:53:25.876580 kubelet[2635]: I0714 21:53:25.876547 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:25.876660 kubelet[2635]: I0714 21:53:25.876593 2635 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:25.877854 containerd[1545]: time="2025-07-14T21:53:25.877824561Z" level=info msg="StopPodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\"" Jul 14 21:53:25.878126 containerd[1545]: time="2025-07-14T21:53:25.878102338Z" level=info msg="Ensure that sandbox 5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561 in task-service has been cleanup successfully" Jul 14 21:53:25.879167 containerd[1545]: time="2025-07-14T21:53:25.879143159Z" level=info msg="StopPodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\"" Jul 14 21:53:25.879401 containerd[1545]: time="2025-07-14T21:53:25.879380293Z" level=info msg="Ensure that sandbox 87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89 in task-service has been cleanup successfully" Jul 14 21:53:25.913952 containerd[1545]: time="2025-07-14T21:53:25.913902009Z" level=error msg="StopPodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" failed" error="failed to destroy network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.914424 kubelet[2635]: E0714 21:53:25.914374 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:25.914504 kubelet[2635]: E0714 21:53:25.914442 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf"} Jul 14 21:53:25.914534 kubelet[2635]: E0714 21:53:25.914512 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.917069 kubelet[2635]: E0714 21:53:25.914536 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" podUID="ad88b19a-39ed-43ef-8d34-f24a9a9dd91a" Jul 14 21:53:25.917069 kubelet[2635]: E0714 21:53:25.916511 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:25.917069 kubelet[2635]: E0714 21:53:25.916554 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d"} Jul 14 21:53:25.917069 kubelet[2635]: E0714 21:53:25.916578 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77d9f340-8b20-4c7c-bc84-1d529d731237\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.917236 containerd[1545]: time="2025-07-14T21:53:25.916006533Z" level=error msg="StopPodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" failed" error="failed to destroy network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.917274 kubelet[2635]: E0714 21:53:25.916599 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77d9f340-8b20-4c7c-bc84-1d529d731237\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" podUID="77d9f340-8b20-4c7c-bc84-1d529d731237" Jul 14 21:53:25.939807 containerd[1545]: time="2025-07-14T21:53:25.939759174Z" level=error msg="StopPodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" failed" error="failed to destroy network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.940033 kubelet[2635]: E0714 21:53:25.939981 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:25.940095 kubelet[2635]: E0714 21:53:25.940047 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad"} Jul 14 21:53:25.940095 kubelet[2635]: E0714 21:53:25.940081 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.940187 kubelet[2635]: E0714 21:53:25.940103 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" podUID="a3a3148e-78bd-4afa-a1ab-e95fcbbdb088" Jul 14 21:53:25.950866 containerd[1545]: time="2025-07-14T21:53:25.950756583Z" level=error msg="StopPodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" failed" error="failed to destroy network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.951113 kubelet[2635]: E0714 21:53:25.951067 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to destroy network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:25.951197 kubelet[2635]: E0714 21:53:25.951126 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89"} Jul 14 21:53:25.951197 kubelet[2635]: E0714 21:53:25.951161 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26306074-76d4-4748-a961-4f9fbf0ca63f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.951197 kubelet[2635]: E0714 21:53:25.951183 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26306074-76d4-4748-a961-4f9fbf0ca63f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nqcm7" podUID="26306074-76d4-4748-a961-4f9fbf0ca63f" Jul 14 21:53:25.951336 containerd[1545]: time="2025-07-14T21:53:25.951279933Z" level=error msg="StopPodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" failed" error="failed to destroy network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.951475 kubelet[2635]: E0714 21:53:25.951447 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:25.951525 kubelet[2635]: E0714 21:53:25.951495 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a"} Jul 14 21:53:25.951551 kubelet[2635]: E0714 21:53:25.951523 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f83ff6d-3a8d-4195-8362-ba2ec00150cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.951598 kubelet[2635]: E0714 21:53:25.951545 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f83ff6d-3a8d-4195-8362-ba2ec00150cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-vz2ck" podUID="5f83ff6d-3a8d-4195-8362-ba2ec00150cb" Jul 14 21:53:25.955567 containerd[1545]: time="2025-07-14T21:53:25.955468780Z" level=error msg="StopPodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" failed" error="failed to destroy network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.955752 kubelet[2635]: E0714 21:53:25.955702 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:25.955752 kubelet[2635]: E0714 21:53:25.955744 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8"} Jul 14 21:53:25.955841 kubelet[2635]: E0714 21:53:25.955771 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24e0aa04-85b8-423b-8338-45073fa49cb5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.955841 kubelet[2635]: E0714 21:53:25.955793 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24e0aa04-85b8-423b-8338-45073fa49cb5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-rvt7n" podUID="24e0aa04-85b8-423b-8338-45073fa49cb5" Jul 14 21:53:25.956829 containerd[1545]: time="2025-07-14T21:53:25.956788098Z" level=error msg="StopPodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" failed" error="failed to destroy network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.956984 kubelet[2635]: E0714 21:53:25.956956 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:25.957033 kubelet[2635]: E0714 21:53:25.956988 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50"} Jul 14 21:53:25.957146 kubelet[2635]: E0714 21:53:25.957009 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00f8cdae-e32e-4020-9c5f-9b5051044975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.957192 kubelet[2635]: E0714 21:53:25.957157 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00f8cdae-e32e-4020-9c5f-9b5051044975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rxt6z" podUID="00f8cdae-e32e-4020-9c5f-9b5051044975" Jul 14 21:53:25.959114 containerd[1545]: time="2025-07-14T21:53:25.959081674Z" level=error msg="StopPodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" failed" error="failed to destroy network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:53:25.959587 kubelet[2635]: E0714 21:53:25.959529 2635 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:25.959587 kubelet[2635]: E0714 21:53:25.959579 2635 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561"} Jul 14 21:53:25.959664 kubelet[2635]: E0714 21:53:25.959611 2635 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:53:25.959664 kubelet[2635]: E0714 21:53:25.959630 2635 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dc9cb6569-s2rx5" podUID="562abfa1-8d7b-4b3d-8c15-9e7f5730819d" Jul 14 21:53:27.962664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3563600031.mount: Deactivated successfully. Jul 14 21:53:28.211708 containerd[1545]: time="2025-07-14T21:53:28.211651647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:28.212591 containerd[1545]: time="2025-07-14T21:53:28.212444529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 14 21:53:28.213736 containerd[1545]: time="2025-07-14T21:53:28.213565068Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:28.215945 containerd[1545]: time="2025-07-14T21:53:28.215399405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:28.227744 containerd[1545]: time="2025-07-14T21:53:28.227698616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.373550889s" Jul 14 21:53:28.227744 containerd[1545]: time="2025-07-14T21:53:28.227742938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 14 21:53:28.235256 containerd[1545]: time="2025-07-14T21:53:28.235214134Z" level=info msg="CreateContainer within sandbox \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 21:53:28.246173 containerd[1545]: time="2025-07-14T21:53:28.246113950Z" level=info msg="CreateContainer within sandbox \"270792f57f630dbd247a70cf8cfb7a44b4d0c2a02136e54f6c89ae599abf02ad\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ace18c82e27271b6a784323da591f8f7700200ca1f18e4b9ab26be779ebdafc8\"" Jul 14 21:53:28.247221 containerd[1545]: time="2025-07-14T21:53:28.246768625Z" level=info msg="StartContainer for \"ace18c82e27271b6a784323da591f8f7700200ca1f18e4b9ab26be779ebdafc8\"" Jul 14 21:53:28.320271 containerd[1545]: time="2025-07-14T21:53:28.319798290Z" 
level=info msg="StartContainer for \"ace18c82e27271b6a784323da591f8f7700200ca1f18e4b9ab26be779ebdafc8\" returns successfully" Jul 14 21:53:28.540006 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 21:53:28.540153 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 21:53:28.659068 containerd[1545]: time="2025-07-14T21:53:28.658753427Z" level=info msg="StopPodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\"" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.741 [INFO][3931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.743 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" iface="eth0" netns="/var/run/netns/cni-55d2e597-ef66-670b-067c-502effad6af0" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.744 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" iface="eth0" netns="/var/run/netns/cni-55d2e597-ef66-670b-067c-502effad6af0" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.745 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" iface="eth0" netns="/var/run/netns/cni-55d2e597-ef66-670b-067c-502effad6af0" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.745 [INFO][3931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.745 [INFO][3931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.815 [INFO][3942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.815 [INFO][3942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.816 [INFO][3942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.825 [WARNING][3942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.825 [INFO][3942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.826 [INFO][3942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:28.836189 containerd[1545]: 2025-07-14 21:53:28.831 [INFO][3931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:28.836563 containerd[1545]: time="2025-07-14T21:53:28.836398268Z" level=info msg="TearDown network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" successfully" Jul 14 21:53:28.836563 containerd[1545]: time="2025-07-14T21:53:28.836424709Z" level=info msg="StopPodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" returns successfully" Jul 14 21:53:28.963691 systemd[1]: run-netns-cni\x2d55d2e597\x2def66\x2d670b\x2d067c\x2d502effad6af0.mount: Deactivated successfully. Jul 14 21:53:29.016387 kubelet[2635]: I0714 21:53:29.016226 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-backend-key-pair\") pod \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\" (UID: \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\") " Jul 14 21:53:29.016387 kubelet[2635]: I0714 21:53:29.016273 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26gv2\" (UniqueName: \"kubernetes.io/projected/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-kube-api-access-26gv2\") pod \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\" (UID: \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\") " Jul 14 21:53:29.016387 kubelet[2635]: I0714 21:53:29.016302 2635 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-ca-bundle\") pod \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\" (UID: \"562abfa1-8d7b-4b3d-8c15-9e7f5730819d\") " Jul 14 21:53:29.021027 kubelet[2635]: I0714 21:53:29.020975 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "562abfa1-8d7b-4b3d-8c15-9e7f5730819d" (UID: "562abfa1-8d7b-4b3d-8c15-9e7f5730819d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 21:53:29.023737 systemd[1]: var-lib-kubelet-pods-562abfa1\x2d8d7b\x2d4b3d\x2d8c15\x2d9e7f5730819d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d26gv2.mount: Deactivated successfully. 
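Every CNI failure in the storm above bottoms out in the same line: stat /var/lib/calico/nodename: no such file or directory. The Calico CNI binary will not perform ADD or DEL work until calico/node has written the node's name into /var/lib/calico/, which only happens once the calico-node container (started above as ace18c82…) is running with that host path mounted; that is why the StopPodSandbox retries only begin succeeding after 21:53:28. Below is a minimal Go sketch of that gate, not Calico's actual source; the file path and the hint string are taken from the log itself.

```go
// A minimal sketch (not Calico's actual source) of the readiness gate behind
// the repeated errors above: the CNI plugin stats /var/lib/calico/nodename,
// which calico/node writes once it is running with /var/lib/calico mounted.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func determineNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// err renders as "stat /var/lib/calico/nodename: no such file or
		// directory", matching the failure string seen throughout the log.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		// Until calico-node starts, every CNI ADD/DEL fails here, so kubelet
		// keeps logging CreatePodSandbox/KillPodSandbox errors and retrying.
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```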
Jul 14 21:53:29.024421 kubelet[2635]: I0714 21:53:29.024112 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-kube-api-access-26gv2" (OuterVolumeSpecName: "kube-api-access-26gv2") pod "562abfa1-8d7b-4b3d-8c15-9e7f5730819d" (UID: "562abfa1-8d7b-4b3d-8c15-9e7f5730819d"). InnerVolumeSpecName "kube-api-access-26gv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 21:53:29.029386 kubelet[2635]: I0714 21:53:29.029337 2635 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "562abfa1-8d7b-4b3d-8c15-9e7f5730819d" (UID: "562abfa1-8d7b-4b3d-8c15-9e7f5730819d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 21:53:29.030788 systemd[1]: var-lib-kubelet-pods-562abfa1\x2d8d7b\x2d4b3d\x2d8c15\x2d9e7f5730819d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 21:53:29.117608 kubelet[2635]: I0714 21:53:29.117555 2635 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 21:53:29.117608 kubelet[2635]: I0714 21:53:29.117580 2635 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26gv2\" (UniqueName: \"kubernetes.io/projected/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-kube-api-access-26gv2\") on node \"localhost\" DevicePath \"\"" Jul 14 21:53:29.117608 kubelet[2635]: I0714 21:53:29.117590 2635 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/562abfa1-8d7b-4b3d-8c15-9e7f5730819d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 21:53:29.201074 kubelet[2635]: I0714 21:53:29.200545 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8ntct" podStartSLOduration=1.511337239 podStartE2EDuration="15.200530753s" podCreationTimestamp="2025-07-14 21:53:14 +0000 UTC" firstStartedPulling="2025-07-14 21:53:14.539527916 +0000 UTC m=+18.901053072" lastFinishedPulling="2025-07-14 21:53:28.22872139 +0000 UTC m=+32.590246586" observedRunningTime="2025-07-14 21:53:28.901624599 +0000 UTC m=+33.263149795" watchObservedRunningTime="2025-07-14 21:53:29.200530753 +0000 UTC m=+33.562055949" Jul 14 21:53:29.420155 kubelet[2635]: I0714 21:53:29.420073 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bfe1fddf-7130-4ebe-a255-53c215dd1f3b-whisker-backend-key-pair\") pod \"whisker-5c49565bdc-9c9g7\" (UID: \"bfe1fddf-7130-4ebe-a255-53c215dd1f3b\") " pod="calico-system/whisker-5c49565bdc-9c9g7" Jul 14 21:53:29.420155 kubelet[2635]: I0714 21:53:29.420128 2635 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvjx2\" (UniqueName: \"kubernetes.io/projected/bfe1fddf-7130-4ebe-a255-53c215dd1f3b-kube-api-access-qvjx2\") pod \"whisker-5c49565bdc-9c9g7\" (UID: \"bfe1fddf-7130-4ebe-a255-53c215dd1f3b\") " pod="calico-system/whisker-5c49565bdc-9c9g7" Jul 14 21:53:29.420155 kubelet[2635]: I0714 21:53:29.420147 2635 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfe1fddf-7130-4ebe-a255-53c215dd1f3b-whisker-ca-bundle\") pod \"whisker-5c49565bdc-9c9g7\" (UID: \"bfe1fddf-7130-4ebe-a255-53c215dd1f3b\") " pod="calico-system/whisker-5c49565bdc-9c9g7" Jul 14 21:53:29.755321 kubelet[2635]: I0714 21:53:29.755099 2635 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562abfa1-8d7b-4b3d-8c15-9e7f5730819d" path="/var/lib/kubelet/pods/562abfa1-8d7b-4b3d-8c15-9e7f5730819d/volumes" Jul 14 21:53:29.828886 containerd[1545]: time="2025-07-14T21:53:29.828842489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c49565bdc-9c9g7,Uid:bfe1fddf-7130-4ebe-a255-53c215dd1f3b,Namespace:calico-system,Attempt:0,}" Jul 14 21:53:30.001244 systemd-networkd[1232]: cali740c0be0172: Link UP Jul 14 21:53:30.001435 systemd-networkd[1232]: cali740c0be0172: Gained carrier Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.860 [INFO][3965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.874 [INFO][3965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5c49565bdc--9c9g7-eth0 whisker-5c49565bdc- calico-system bfe1fddf-7130-4ebe-a255-53c215dd1f3b 897 0 2025-07-14 21:53:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c49565bdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c49565bdc-9c9g7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali740c0be0172 [] [] }} ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.874 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.896 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" HandleID="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Workload="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.896 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" HandleID="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Workload="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000508b40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c49565bdc-9c9g7", "timestamp":"2025-07-14 21:53:29.896288899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 
21:53:29.896 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.896 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.896 [INFO][3980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.909 [INFO][3980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.936 [INFO][3980] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.947 [INFO][3980] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.951 [INFO][3980] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.956 [INFO][3980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.956 [INFO][3980] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.960 [INFO][3980] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040 Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.973 [INFO][3980] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.980 [INFO][3980] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.981 [INFO][3980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" host="localhost" Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.981 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
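The IPAM trace just above shows the allocation pattern for the whisker pod: acquire the host-wide lock, confirm this host's affinity for the block 192.168.88.128/26, then claim the first free address in it, 192.168.88.129. The following is a toy illustration of that block walk, heavily simplified from Calico's IPAM (no datastore writes, no lock, no handle bookkeeping).

```go
// Toy illustration (heavily simplified from Calico IPAM) of claiming the
// first free address from an affine block, as in the trace above.
package main

import (
	"fmt"
	"net/netip"
)

// assignFromBlock returns the first address in the block that is neither the
// network address itself nor already allocated. Real Calico does this under
// the host-wide IPAM lock and then persists the claim to the datastore.
func assignFromBlock(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, error) {
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	got, err := assignFromBlock(block, map[netip.Addr]bool{})
	if err != nil {
		panic(err)
	}
	fmt.Println(got) // 192.168.88.129, matching the claim in the log
}
```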
Jul 14 21:53:30.021104 containerd[1545]: 2025-07-14 21:53:29.981 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" HandleID="k8s-pod-network.d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Workload="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.021682 containerd[1545]: 2025-07-14 21:53:29.986 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c49565bdc--9c9g7-eth0", GenerateName:"whisker-5c49565bdc-", Namespace:"calico-system", SelfLink:"", UID:"bfe1fddf-7130-4ebe-a255-53c215dd1f3b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c49565bdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c49565bdc-9c9g7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali740c0be0172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:30.021682 containerd[1545]: 2025-07-14 21:53:29.987 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.021682 containerd[1545]: 2025-07-14 21:53:29.987 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali740c0be0172 ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.021682 containerd[1545]: 2025-07-14 21:53:30.002 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.021682 containerd[1545]: 2025-07-14 21:53:30.003 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c49565bdc--9c9g7-eth0", GenerateName:"whisker-5c49565bdc-", Namespace:"calico-system", SelfLink:"", UID:"bfe1fddf-7130-4ebe-a255-53c215dd1f3b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c49565bdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040", Pod:"whisker-5c49565bdc-9c9g7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali740c0be0172", MAC:"3a:f9:d6:9b:9f:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:30.021682 containerd[1545]: 2025-07-14 21:53:30.018 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040" Namespace="calico-system" Pod="whisker-5c49565bdc-9c9g7" WorkloadEndpoint="localhost-k8s-whisker--5c49565bdc--9c9g7-eth0" Jul 14 21:53:30.133414 containerd[1545]: time="2025-07-14T21:53:30.132234549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:30.134065 containerd[1545]: time="2025-07-14T21:53:30.133761264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:30.134065 containerd[1545]: time="2025-07-14T21:53:30.133789066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:30.134065 containerd[1545]: time="2025-07-14T21:53:30.133884030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:30.154603 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:30.171658 containerd[1545]: time="2025-07-14T21:53:30.171609817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c49565bdc-9c9g7,Uid:bfe1fddf-7130-4ebe-a255-53c215dd1f3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040\"" Jul 14 21:53:30.173279 containerd[1545]: time="2025-07-14T21:53:30.173257179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 21:53:31.057490 containerd[1545]: time="2025-07-14T21:53:31.057442325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:31.057490 containerd[1545]: time="2025-07-14T21:53:31.052624735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 14 21:53:31.058089 containerd[1545]: time="2025-07-14T21:53:31.056695490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 883.403509ms" Jul 14 21:53:31.058089 containerd[1545]: time="2025-07-14T21:53:31.057534050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 14 21:53:31.058478 containerd[1545]: time="2025-07-14T21:53:31.058451254Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:31.059683 containerd[1545]: time="2025-07-14T21:53:31.059641151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:31.061196 containerd[1545]: time="2025-07-14T21:53:31.060272101Z" level=info msg="CreateContainer within sandbox \"d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 21:53:31.068546 containerd[1545]: time="2025-07-14T21:53:31.068511736Z" level=info msg="CreateContainer within sandbox \"d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a565720e72c85e0291206efc58d49cff9c36115fe1c97c5843e693a77b3edb73\"" Jul 14 21:53:31.069947 containerd[1545]: time="2025-07-14T21:53:31.069100404Z" level=info msg="StartContainer for \"a565720e72c85e0291206efc58d49cff9c36115fe1c97c5843e693a77b3edb73\"" Jul 14 21:53:31.123687 containerd[1545]: time="2025-07-14T21:53:31.123625978Z" level=info msg="StartContainer for \"a565720e72c85e0291206efc58d49cff9c36115fe1c97c5843e693a77b3edb73\" returns successfully" Jul 14 21:53:31.129278 containerd[1545]: time="2025-07-14T21:53:31.129236966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 21:53:31.165261 systemd-networkd[1232]: cali740c0be0172: Gained IPv6LL Jul 14 
21:53:32.477369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269231115.mount: Deactivated successfully. Jul 14 21:53:32.494562 containerd[1545]: time="2025-07-14T21:53:32.493751686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:32.495715 containerd[1545]: time="2025-07-14T21:53:32.495685376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 14 21:53:32.496518 containerd[1545]: time="2025-07-14T21:53:32.496492533Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:32.499895 containerd[1545]: time="2025-07-14T21:53:32.499858930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:32.501194 containerd[1545]: time="2025-07-14T21:53:32.501153390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.371874501s" Jul 14 21:53:32.501301 containerd[1545]: time="2025-07-14T21:53:32.501285516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 14 21:53:32.504042 containerd[1545]: time="2025-07-14T21:53:32.503722429Z" level=info msg="CreateContainer within sandbox \"d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 21:53:32.514285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999752507.mount: Deactivated successfully. Jul 14 21:53:32.520519 containerd[1545]: time="2025-07-14T21:53:32.520475128Z" level=info msg="CreateContainer within sandbox \"d0886d32ad3bd03d635fba60ee43e8576d3151925d1ffb7051cb8a285c991040\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5191efe3a664955ae9e93a8fd308aacc3f5be48ef979d29224e2bc8bf5e741e2\"" Jul 14 21:53:32.521029 containerd[1545]: time="2025-07-14T21:53:32.520951790Z" level=info msg="StartContainer for \"5191efe3a664955ae9e93a8fd308aacc3f5be48ef979d29224e2bc8bf5e741e2\"" Jul 14 21:53:32.574099 containerd[1545]: time="2025-07-14T21:53:32.574051817Z" level=info msg="StartContainer for \"5191efe3a664955ae9e93a8fd308aacc3f5be48ef979d29224e2bc8bf5e741e2\" returns successfully" Jul 14 21:53:34.733284 systemd[1]: Started sshd@7-10.0.0.64:22-10.0.0.1:40636.service - OpenSSH per-connection server daemon (10.0.0.1:40636). Jul 14 21:53:34.770651 sshd[4372]: Accepted publickey for core from 10.0.0.1 port 40636 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:34.772098 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:34.780006 systemd-logind[1524]: New session 8 of user core. Jul 14 21:53:34.789299 systemd[1]: Started session-8.scope - Session 8 of User core. 
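The whisker pod above comes up in the standard CRI sequence: RunPodSandbox returns a sandbox id, each image is pulled and unpacked, then CreateContainer and StartContainer run inside that sandbox. For reference, a minimal sketch of the same pull through containerd's v1 Go client; the socket path and the CRI "k8s.io" namespace are stock defaults assumed here, not values taken from this log.

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; kubelet-managed images live in "k8s.io".
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same ref pulled above; WithPullUnpack also unpacks the snapshot so
	// the image is immediately usable for a container.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```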
Jul 14 21:53:35.034282 sshd[4372]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:35.038637 systemd[1]: sshd@7-10.0.0.64:22-10.0.0.1:40636.service: Deactivated successfully. Jul 14 21:53:35.040721 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:53:35.040803 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:53:35.042131 systemd-logind[1524]: Removed session 8. Jul 14 21:53:36.749274 containerd[1545]: time="2025-07-14T21:53:36.748065357Z" level=info msg="StopPodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\"" Jul 14 21:53:36.749917 containerd[1545]: time="2025-07-14T21:53:36.749809310Z" level=info msg="StopPodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\"" Jul 14 21:53:36.797449 kubelet[2635]: I0714 21:53:36.797053 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c49565bdc-9c9g7" podStartSLOduration=5.467409256 podStartE2EDuration="7.797036629s" podCreationTimestamp="2025-07-14 21:53:29 +0000 UTC" firstStartedPulling="2025-07-14 21:53:30.172825517 +0000 UTC m=+34.534350713" lastFinishedPulling="2025-07-14 21:53:32.50245289 +0000 UTC m=+36.863978086" observedRunningTime="2025-07-14 21:53:32.919581554 +0000 UTC m=+37.281106750" watchObservedRunningTime="2025-07-14 21:53:36.797036629 +0000 UTC m=+41.158561825" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.795 [INFO][4461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.795 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" iface="eth0" netns="/var/run/netns/cni-704c2c38-9c10-5a6f-a660-4f7a2e9ba4a0" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.796 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" iface="eth0" netns="/var/run/netns/cni-704c2c38-9c10-5a6f-a660-4f7a2e9ba4a0" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.797 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" iface="eth0" netns="/var/run/netns/cni-704c2c38-9c10-5a6f-a660-4f7a2e9ba4a0" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.798 [INFO][4461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.798 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.820 [INFO][4476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.820 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
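The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the time spent pulling images (firstStartedPulling through lastFinishedPulling). A quick check in Go, using timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2025-07-14 21:53:29 +0000 UTC")             // podCreationTimestamp
	observed := parse("2025-07-14 21:53:36.797036629 +0000 UTC")  // watchObservedRunningTime
	pullStart := parse("2025-07-14 21:53:30.172825517 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2025-07-14 21:53:32.50245289 +0000 UTC")    // lastFinishedPulling

	e2e := observed.Sub(created)        // 7.797036629s, matches podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 5.467409256s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}
```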
Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.820 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.829 [WARNING][4476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.829 [INFO][4476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.830 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:36.834462 containerd[1545]: 2025-07-14 21:53:36.832 [INFO][4461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:36.835186 containerd[1545]: time="2025-07-14T21:53:36.835036965Z" level=info msg="TearDown network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" successfully" Jul 14 21:53:36.835186 containerd[1545]: time="2025-07-14T21:53:36.835074367Z" level=info msg="StopPodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" returns successfully" Jul 14 21:53:36.837464 systemd[1]: run-netns-cni\x2d704c2c38\x2d9c10\x2d5a6f\x2da660\x2d4f7a2e9ba4a0.mount: Deactivated successfully. Jul 14 21:53:36.838074 kubelet[2635]: E0714 21:53:36.837463 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:36.839127 containerd[1545]: time="2025-07-14T21:53:36.838434466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nqcm7,Uid:26306074-76d4-4748-a961-4f9fbf0ca63f,Namespace:kube-system,Attempt:1,}" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.801 [INFO][4460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.802 [INFO][4460] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" iface="eth0" netns="/var/run/netns/cni-ee070b85-1cb3-7e22-a85d-eaebf77d7ae4" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.802 [INFO][4460] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" iface="eth0" netns="/var/run/netns/cni-ee070b85-1cb3-7e22-a85d-eaebf77d7ae4" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.802 [INFO][4460] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" iface="eth0" netns="/var/run/netns/cni-ee070b85-1cb3-7e22-a85d-eaebf77d7ae4" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.802 [INFO][4460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.802 [INFO][4460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.822 [INFO][4482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.822 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.830 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.839 [WARNING][4482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.840 [INFO][4482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.847 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:36.851248 containerd[1545]: 2025-07-14 21:53:36.849 [INFO][4460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:36.853774 containerd[1545]: time="2025-07-14T21:53:36.851773220Z" level=info msg="TearDown network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" successfully" Jul 14 21:53:36.853774 containerd[1545]: time="2025-07-14T21:53:36.851798301Z" level=info msg="StopPodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" returns successfully" Jul 14 21:53:36.853867 systemd[1]: run-netns-cni\x2dee070b85\x2d1cb3\x2d7e22\x2da85d\x2deaebf77d7ae4.mount: Deactivated successfully. 
Jul 14 21:53:36.854830 containerd[1545]: time="2025-07-14T21:53:36.854609097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cbd4c654-nf4d4,Uid:a3a3148e-78bd-4afa-a1ab-e95fcbbdb088,Namespace:calico-system,Attempt:1,}" Jul 14 21:53:37.023114 systemd-networkd[1232]: cali09369dbf1dd: Link UP Jul 14 21:53:37.024484 systemd-networkd[1232]: cali09369dbf1dd: Gained carrier Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.939 [INFO][4498] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.955 [INFO][4498] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0 calico-kube-controllers-58cbd4c654- calico-system a3a3148e-78bd-4afa-a1ab-e95fcbbdb088 980 0 2025-07-14 21:53:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58cbd4c654 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58cbd4c654-nf4d4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali09369dbf1dd [] [] }} ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.955 [INFO][4498] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.979 [INFO][4522] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" HandleID="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.980 [INFO][4522] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" HandleID="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58cbd4c654-nf4d4", "timestamp":"2025-07-14 21:53:36.979970498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.980 [INFO][4522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.980 [INFO][4522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.980 [INFO][4522] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.994 [INFO][4522] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:36.998 [INFO][4522] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.002 [INFO][4522] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.004 [INFO][4522] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.006 [INFO][4522] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.006 [INFO][4522] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.008 [INFO][4522] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.011 [INFO][4522] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.017 [INFO][4522] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.017 [INFO][4522] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" host="localhost" Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.017 [INFO][4522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
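The claim above lands in the same host-affine block as the earlier whisker assignment: affinity for 192.168.88.128/26 (addresses .128 through .191) is confirmed, then the next free IP, .130, is taken after .129. A plain-Go spot check of the block bounds, independent of Calico's ipam package:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host-affine block from the log: 192.168.88.128/26 spans .128 to .191.
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.129", "192.168.88.130"} {
		addr := netip.MustParseAddr(s)
		fmt.Println(addr, "in", block, "->", block.Contains(addr))
	}
}
```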
Jul 14 21:53:37.039698 containerd[1545]: 2025-07-14 21:53:37.017 [INFO][4522] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" HandleID="k8s-pod-network.bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.040297 containerd[1545]: 2025-07-14 21:53:37.020 [INFO][4498] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0", GenerateName:"calico-kube-controllers-58cbd4c654-", Namespace:"calico-system", SelfLink:"", UID:"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cbd4c654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58cbd4c654-nf4d4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09369dbf1dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:37.040297 containerd[1545]: 2025-07-14 21:53:37.020 [INFO][4498] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.040297 containerd[1545]: 2025-07-14 21:53:37.020 [INFO][4498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09369dbf1dd ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.040297 containerd[1545]: 2025-07-14 21:53:37.023 [INFO][4498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.040297 containerd[1545]: 2025-07-14 21:53:37.025 [INFO][4498] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0", GenerateName:"calico-kube-controllers-58cbd4c654-", Namespace:"calico-system", SelfLink:"", UID:"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cbd4c654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f", Pod:"calico-kube-controllers-58cbd4c654-nf4d4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09369dbf1dd", MAC:"16:cf:39:d7:b6:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:37.040297 containerd[1545]: 2025-07-14 21:53:37.037 [INFO][4498] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f" Namespace="calico-system" Pod="calico-kube-controllers-58cbd4c654-nf4d4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:37.054683 containerd[1545]: time="2025-07-14T21:53:37.054465171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:37.054683 containerd[1545]: time="2025-07-14T21:53:37.054530214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:37.054683 containerd[1545]: time="2025-07-14T21:53:37.054541694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:37.054683 containerd[1545]: time="2025-07-14T21:53:37.054638578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:37.095204 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:37.114886 containerd[1545]: time="2025-07-14T21:53:37.114834732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58cbd4c654-nf4d4,Uid:a3a3148e-78bd-4afa-a1ab-e95fcbbdb088,Namespace:calico-system,Attempt:1,} returns sandbox id \"bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f\"" Jul 14 21:53:37.117525 containerd[1545]: time="2025-07-14T21:53:37.117502879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 21:53:37.203094 systemd-networkd[1232]: calib7a41df8775: Link UP Jul 14 21:53:37.203449 systemd-networkd[1232]: calib7a41df8775: Gained carrier Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:36.941 [INFO][4492] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:36.963 [INFO][4492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0 coredns-7c65d6cfc9- kube-system 26306074-76d4-4748-a961-4f9fbf0ca63f 979 0 2025-07-14 21:53:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-nqcm7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7a41df8775 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:36.963 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:36.986 [INFO][4528] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" HandleID="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:36.986 [INFO][4528] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" HandleID="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-nqcm7", "timestamp":"2025-07-14 21:53:36.986386084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:36.986 [INFO][4528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.017 [INFO][4528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.017 [INFO][4528] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.095 [INFO][4528] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.100 [INFO][4528] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.104 [INFO][4528] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.107 [INFO][4528] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.110 [INFO][4528] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.110 [INFO][4528] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.111 [INFO][4528] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.186 [INFO][4528] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.199 [INFO][4528] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.199 [INFO][4528] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" host="localhost" Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.199 [INFO][4528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
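Note how the two interleaved ADDs serialize: the IPAM lock is host-wide, so the coredns request logs "About to acquire" at 21:53:36.986 but only acquires the lock at 21:53:37.017, the same instant the kube-controllers request releases it. The rough wait, computed from the log timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "15:04:05.000"
	requested, _ := time.Parse(layout, "21:53:36.986") // "About to acquire"
	acquired, _ := time.Parse(layout, "21:53:37.017")  // "Acquired"
	fmt.Println("waited for host-wide IPAM lock:", acquired.Sub(requested)) // 31ms
}
```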
Jul 14 21:53:37.224669 containerd[1545]: 2025-07-14 21:53:37.199 [INFO][4528] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" HandleID="k8s-pod-network.02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.225331 containerd[1545]: 2025-07-14 21:53:37.201 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26306074-76d4-4748-a961-4f9fbf0ca63f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-nqcm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a41df8775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:37.225331 containerd[1545]: 2025-07-14 21:53:37.201 [INFO][4492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.225331 containerd[1545]: 2025-07-14 21:53:37.201 [INFO][4492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7a41df8775 ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.225331 containerd[1545]: 2025-07-14 21:53:37.203 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.225331 
containerd[1545]: 2025-07-14 21:53:37.204 [INFO][4492] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26306074-76d4-4748-a961-4f9fbf0ca63f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb", Pod:"coredns-7c65d6cfc9-nqcm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a41df8775", MAC:"a6:de:09:35:9d:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:37.225331 containerd[1545]: 2025-07-14 21:53:37.222 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nqcm7" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:37.240792 containerd[1545]: time="2025-07-14T21:53:37.240691980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:37.240792 containerd[1545]: time="2025-07-14T21:53:37.240745022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:37.240792 containerd[1545]: time="2025-07-14T21:53:37.240764663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:37.241323 containerd[1545]: time="2025-07-14T21:53:37.240856306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:37.265673 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:37.283480 containerd[1545]: time="2025-07-14T21:53:37.283371865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nqcm7,Uid:26306074-76d4-4748-a961-4f9fbf0ca63f,Namespace:kube-system,Attempt:1,} returns sandbox id \"02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb\"" Jul 14 21:53:37.284899 kubelet[2635]: E0714 21:53:37.284871 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:37.287735 containerd[1545]: time="2025-07-14T21:53:37.287671199Z" level=info msg="CreateContainer within sandbox \"02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:53:37.313425 containerd[1545]: time="2025-07-14T21:53:37.313301355Z" level=info msg="CreateContainer within sandbox \"02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d694a3e43309b08fca9d90cbe93c4df66ac02a7b1d98303cb5edf3c1db4145aa\"" Jul 14 21:53:37.314090 containerd[1545]: time="2025-07-14T21:53:37.313843057Z" level=info msg="StartContainer for \"d694a3e43309b08fca9d90cbe93c4df66ac02a7b1d98303cb5edf3c1db4145aa\"" Jul 14 21:53:37.372892 containerd[1545]: time="2025-07-14T21:53:37.372853003Z" level=info msg="StartContainer for \"d694a3e43309b08fca9d90cbe93c4df66ac02a7b1d98303cb5edf3c1db4145aa\" returns successfully" Jul 14 21:53:37.914661 kubelet[2635]: E0714 21:53:37.914512 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:37.929507 kubelet[2635]: I0714 21:53:37.926196 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nqcm7" podStartSLOduration=36.926180213 podStartE2EDuration="36.926180213s" podCreationTimestamp="2025-07-14 21:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:53:37.924673832 +0000 UTC m=+42.286199028" watchObservedRunningTime="2025-07-14 21:53:37.926180213 +0000 UTC m=+42.287705369" Jul 14 21:53:38.333131 systemd-networkd[1232]: cali09369dbf1dd: Gained IPv6LL Jul 14 21:53:38.461170 systemd-networkd[1232]: calib7a41df8775: Gained IPv6LL Jul 14 21:53:38.724162 containerd[1545]: time="2025-07-14T21:53:38.724118596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:38.724839 containerd[1545]: time="2025-07-14T21:53:38.724813863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 14 21:53:38.725721 containerd[1545]: time="2025-07-14T21:53:38.725697058Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:38.728037 containerd[1545]: time="2025-07-14T21:53:38.727812621Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:38.728905 containerd[1545]: time="2025-07-14T21:53:38.728835382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.611301221s" Jul 14 21:53:38.729367 containerd[1545]: time="2025-07-14T21:53:38.729345522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 14 21:53:38.738683 containerd[1545]: time="2025-07-14T21:53:38.738647409Z" level=info msg="CreateContainer within sandbox \"bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 21:53:38.749362 containerd[1545]: time="2025-07-14T21:53:38.748196585Z" level=info msg="StopPodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\"" Jul 14 21:53:38.749362 containerd[1545]: time="2025-07-14T21:53:38.748646043Z" level=info msg="StopPodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\"" Jul 14 21:53:38.750836 containerd[1545]: time="2025-07-14T21:53:38.749713925Z" level=info msg="CreateContainer within sandbox \"bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fb072fa674f2a18b698913476f9697454d4376e5fe45e2a88db5a61ecb14f26d\"" Jul 14 21:53:38.751148 containerd[1545]: time="2025-07-14T21:53:38.750955574Z" level=info msg="StartContainer for \"fb072fa674f2a18b698913476f9697454d4376e5fe45e2a88db5a61ecb14f26d\"" Jul 14 21:53:38.836309 containerd[1545]: time="2025-07-14T21:53:38.836273059Z" level=info msg="StartContainer for \"fb072fa674f2a18b698913476f9697454d4376e5fe45e2a88db5a61ecb14f26d\" returns successfully" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.800 [INFO][4758] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.801 [INFO][4758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" iface="eth0" netns="/var/run/netns/cni-cac81250-a56b-4a05-ee89-a595809ec261" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.801 [INFO][4758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" iface="eth0" netns="/var/run/netns/cni-cac81250-a56b-4a05-ee89-a595809ec261" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.801 [INFO][4758] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" iface="eth0" netns="/var/run/netns/cni-cac81250-a56b-4a05-ee89-a595809ec261" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.801 [INFO][4758] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.801 [INFO][4758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.826 [INFO][4794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.826 [INFO][4794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.826 [INFO][4794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.834 [WARNING][4794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.834 [INFO][4794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.840 [INFO][4794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:38.850410 containerd[1545]: 2025-07-14 21:53:38.846 [INFO][4758] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:38.855329 systemd[1]: run-netns-cni\x2dcac81250\x2da56b\x2d4a05\x2dee89\x2da595809ec261.mount: Deactivated successfully. 
Jul 14 21:53:38.855821 containerd[1545]: time="2025-07-14T21:53:38.855791548Z" level=info msg="TearDown network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" successfully" Jul 14 21:53:38.855865 containerd[1545]: time="2025-07-14T21:53:38.855825350Z" level=info msg="StopPodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" returns successfully" Jul 14 21:53:38.856134 kubelet[2635]: E0714 21:53:38.856108 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:38.856724 containerd[1545]: time="2025-07-14T21:53:38.856680663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rvt7n,Uid:24e0aa04-85b8-423b-8338-45073fa49cb5,Namespace:kube-system,Attempt:1,}" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.801 [INFO][4754] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.802 [INFO][4754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" iface="eth0" netns="/var/run/netns/cni-5b77b95a-4e4b-647a-6527-5359615b4dfb" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.803 [INFO][4754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" iface="eth0" netns="/var/run/netns/cni-5b77b95a-4e4b-647a-6527-5359615b4dfb" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.803 [INFO][4754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" iface="eth0" netns="/var/run/netns/cni-5b77b95a-4e4b-647a-6527-5359615b4dfb" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.803 [INFO][4754] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.803 [INFO][4754] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.831 [INFO][4799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.831 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.840 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.849 [WARNING][4799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.849 [INFO][4799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.854 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:38.862795 containerd[1545]: 2025-07-14 21:53:38.859 [INFO][4754] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:38.863896 containerd[1545]: time="2025-07-14T21:53:38.863863187Z" level=info msg="TearDown network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" successfully" Jul 14 21:53:38.863896 containerd[1545]: time="2025-07-14T21:53:38.863892028Z" level=info msg="StopPodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" returns successfully" Jul 14 21:53:38.864431 containerd[1545]: time="2025-07-14T21:53:38.864401408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vz2ck,Uid:5f83ff6d-3a8d-4195-8362-ba2ec00150cb,Namespace:calico-system,Attempt:1,}" Jul 14 21:53:38.868073 systemd[1]: run-netns-cni\x2d5b77b95a\x2d4e4b\x2d647a\x2d6527\x2d5359615b4dfb.mount: Deactivated successfully. Jul 14 21:53:38.924617 kubelet[2635]: E0714 21:53:38.924588 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:38.939030 kubelet[2635]: I0714 21:53:38.938972 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58cbd4c654-nf4d4" podStartSLOduration=23.323600766 podStartE2EDuration="24.938952508s" podCreationTimestamp="2025-07-14 21:53:14 +0000 UTC" firstStartedPulling="2025-07-14 21:53:37.1160389 +0000 UTC m=+41.477564096" lastFinishedPulling="2025-07-14 21:53:38.731390642 +0000 UTC m=+43.092915838" observedRunningTime="2025-07-14 21:53:38.938419967 +0000 UTC m=+43.299945123" watchObservedRunningTime="2025-07-14 21:53:38.938952508 +0000 UTC m=+43.300477704" Jul 14 21:53:39.011633 systemd-networkd[1232]: cali7973f2a521e: Link UP Jul 14 21:53:39.013340 systemd-networkd[1232]: cali7973f2a521e: Gained carrier Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.899 [INFO][4831] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.915 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0 coredns-7c65d6cfc9- kube-system 24e0aa04-85b8-423b-8338-45073fa49cb5 1011 0 2025-07-14 21:53:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-rvt7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7973f2a521e [{dns UDP 53 0 } 
{dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.916 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.952 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" HandleID="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.952 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" HandleID="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-rvt7n", "timestamp":"2025-07-14 21:53:38.952710451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.952 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.952 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
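The kubelet dns.go:153 errors repeated through this stretch of the log ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the node's /etc/resolv.conf lists more nameservers than the resolver limit of three (glibc's MAXNS), so only the first three are applied and the rest are dropped when kubelet composes pod DNS config. The sketch below is illustrative only, an assumption-level mirror of that truncation, not kubelet's actual implementation:

// Illustrative sketch (assumption: not kubelet's real code). It reproduces
// the behavior behind the "Nameserver limits exceeded" entries above:
// parse resolv.conf, keep the first three nameservers (the classic glibc
// MAXNS limit), and report the ones omitted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; more than this cannot be applied

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: omitting %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}

With the resolv.conf this node evidently has, the applied line comes out as "1.1.1.1 1.0.0.1 8.8.8.8", exactly what kubelet keeps reporting.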
Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.953 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.969 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.976 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.982 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.984 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.987 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.987 [INFO][4860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.992 [INFO][4860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06 Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:38.996 [INFO][4860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:39.003 [INFO][4860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:39.003 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" host="localhost" Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:39.003 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
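The [4860] IPAM trace above is the whole Calico assignment algorithm in miniature: take the host-wide lock, confirm the host's affinity for the block 192.168.88.128/26, load the block, claim the next free address under a fresh handle, write the block back, then release the lock. Note also that the goldmane request's [4867] entries further down log "About to acquire host-wide IPAM lock" at 21:53:38.972 but "Acquired" only at 21:53:39.003, the instant this request releases it: concurrent ADDs on one host are serialized by that lock. A minimal sketch of the claim step, assuming a single in-memory block (this is not Calico's real code; affinity negotiation and datastore writes are elided):

// Minimal block-allocation sketch, not Calico's implementation.
// One /26 block with host affinity; a host-wide lock serializes claims;
// each claim records a handle ID of the form "k8s-pod-network.<containerID>".
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	cidr      netip.Prefix
	allocated map[int]string // ordinal within the block -> handle ID
}

var hostWideLock sync.Mutex // "Acquired/Released host-wide IPAM lock"

func (b *block) assign(handle string) (netip.Addr, error) {
	hostWideLock.Lock()
	defer hostWideLock.Unlock()
	addr := b.cidr.Addr()
	for ord := 0; ord < 1<<(32-b.cidr.Bits()); ord++ {
		if _, taken := b.allocated[ord]; !taken {
			b.allocated[ord] = handle // "Writing block in order to claim IPs"
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %v is full", b.cidr)
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		// Assumed for illustration: ordinals 0-3 (.128-.131) already taken
		// earlier in this boot, so the next claim lands on .132 as in the log.
		allocated: map[int]string{0: "h0", 1: "h1", 2: "h2", 3: "h3"},
	}
	ip, _ := b.assign("k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06")
	fmt.Println("claimed:", ip) // 192.168.88.132, matching the [4860] result
}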
Jul 14 21:53:39.026986 containerd[1545]: 2025-07-14 21:53:39.003 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" HandleID="k8s-pod-network.e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.028369 containerd[1545]: 2025-07-14 21:53:39.007 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24e0aa04-85b8-423b-8338-45073fa49cb5", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-rvt7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7973f2a521e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:39.028369 containerd[1545]: 2025-07-14 21:53:39.007 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.028369 containerd[1545]: 2025-07-14 21:53:39.007 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7973f2a521e ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.028369 containerd[1545]: 2025-07-14 21:53:39.014 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.028369 
containerd[1545]: 2025-07-14 21:53:39.014 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24e0aa04-85b8-423b-8338-45073fa49cb5", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06", Pod:"coredns-7c65d6cfc9-rvt7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7973f2a521e", MAC:"b6:52:be:b1:70:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:39.028369 containerd[1545]: 2025-07-14 21:53:39.024 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06" Namespace="kube-system" Pod="coredns-7c65d6cfc9-rvt7n" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:39.138781 containerd[1545]: time="2025-07-14T21:53:39.134982114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:39.138781 containerd[1545]: time="2025-07-14T21:53:39.138568732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:39.138781 containerd[1545]: time="2025-07-14T21:53:39.138586773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:39.138781 containerd[1545]: time="2025-07-14T21:53:39.138726138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:39.139930 systemd-networkd[1232]: cali78f3eab06f3: Link UP Jul 14 21:53:39.140299 systemd-networkd[1232]: cali78f3eab06f3: Gained carrier Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:38.903 [INFO][4841] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:38.922 [INFO][4841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0 goldmane-58fd7646b9- calico-system 5f83ff6d-3a8d-4195-8362-ba2ec00150cb 1010 0 2025-07-14 21:53:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-vz2ck eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali78f3eab06f3 [] [] }} ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:38.922 [INFO][4841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:38.971 [INFO][4867] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" HandleID="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:38.972 [INFO][4867] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" HandleID="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c440), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-vz2ck", "timestamp":"2025-07-14 21:53:38.971868446 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:38.972 [INFO][4867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.003 [INFO][4867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.003 [INFO][4867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.067 [INFO][4867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.076 [INFO][4867] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.080 [INFO][4867] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.082 [INFO][4867] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.119 [INFO][4867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.119 [INFO][4867] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.123 [INFO][4867] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.128 [INFO][4867] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.135 [INFO][4867] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.135 [INFO][4867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" host="localhost" Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.135 [INFO][4867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
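Each claim above is keyed by a handle of the form k8s-pod-network.<containerID>, so an allocation is tied to one sandbox. That is why the teardown earlier in this section ([4799], and again [5155], [5153] and [5142] below) can hit the WARNING "Asked to release address but it doesn't exist. Ignoring": release by handle ID finds nothing, and the plugin falls back to "Releasing address using workloadID". A hedged sketch of that two-step release (the maps and the sample IP are assumptions for illustration, not Calico's datastore):

// Two-step release sketch: by handle ID first, then by workload ID.
// Not Calico's code; byHandle/byWorkload stand in for the IPAM datastore.
package main

import "fmt"

var (
	byHandle   = map[string][]string{} // handle ID -> claimed IPs
	byWorkload = map[string][]string{} // workload endpoint ID -> claimed IPs
)

func releaseIPs(handleID, workloadID string) []string {
	if ips, ok := byHandle[handleID]; ok {
		delete(byHandle, handleID)
		return ips
	}
	// Mirrors the WARNING above: nothing is recorded under this handle.
	fmt.Printf("asked to release %q but it doesn't exist; ignoring\n", handleID)
	ips := byWorkload[workloadID]
	delete(byWorkload, workloadID)
	return ips
}

func main() {
	// Hypothetical leftover claim for the old goldmane sandbox; the address
	// it actually held is not visible in this excerpt.
	byWorkload["localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0"] = []string{"192.168.88.131"}
	fmt.Println("released:", releaseIPs(
		"k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a",
		"localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0"))
}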
Jul 14 21:53:39.159372 containerd[1545]: 2025-07-14 21:53:39.135 [INFO][4867] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" HandleID="k8s-pod-network.2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.159905 containerd[1545]: 2025-07-14 21:53:39.138 [INFO][4841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5f83ff6d-3a8d-4195-8362-ba2ec00150cb", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-vz2ck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78f3eab06f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:39.159905 containerd[1545]: 2025-07-14 21:53:39.138 [INFO][4841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.159905 containerd[1545]: 2025-07-14 21:53:39.138 [INFO][4841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78f3eab06f3 ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.159905 containerd[1545]: 2025-07-14 21:53:39.140 [INFO][4841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.159905 containerd[1545]: 2025-07-14 21:53:39.140 [INFO][4841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5f83ff6d-3a8d-4195-8362-ba2ec00150cb", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff", Pod:"goldmane-58fd7646b9-vz2ck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78f3eab06f3", MAC:"fa:ad:83:b3:0b:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:39.159905 containerd[1545]: 2025-07-14 21:53:39.153 [INFO][4841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff" Namespace="calico-system" Pod="goldmane-58fd7646b9-vz2ck" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:39.168990 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:39.176463 containerd[1545]: time="2025-07-14T21:53:39.176253344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:39.176463 containerd[1545]: time="2025-07-14T21:53:39.176376228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:39.176463 containerd[1545]: time="2025-07-14T21:53:39.176395349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:39.176846 containerd[1545]: time="2025-07-14T21:53:39.176776004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:39.208262 containerd[1545]: time="2025-07-14T21:53:39.208225575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rvt7n,Uid:24e0aa04-85b8-423b-8338-45073fa49cb5,Namespace:kube-system,Attempt:1,} returns sandbox id \"e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06\"" Jul 14 21:53:39.210535 kubelet[2635]: E0714 21:53:39.209111 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:39.214253 containerd[1545]: time="2025-07-14T21:53:39.213329411Z" level=info msg="CreateContainer within sandbox \"e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:53:39.221065 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:39.224363 containerd[1545]: time="2025-07-14T21:53:39.224328075Z" level=info msg="CreateContainer within sandbox \"e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2fae060df53ac786ef6138213dc72b4c4013b7d4a6d8cb918268a18ad0223176\"" Jul 14 21:53:39.226497 containerd[1545]: time="2025-07-14T21:53:39.225495920Z" level=info msg="StartContainer for \"2fae060df53ac786ef6138213dc72b4c4013b7d4a6d8cb918268a18ad0223176\"" Jul 14 21:53:39.247876 containerd[1545]: time="2025-07-14T21:53:39.247842060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-vz2ck,Uid:5f83ff6d-3a8d-4195-8362-ba2ec00150cb,Namespace:calico-system,Attempt:1,} returns sandbox id \"2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff\"" Jul 14 21:53:39.251188 containerd[1545]: time="2025-07-14T21:53:39.249804856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 21:53:39.272846 containerd[1545]: time="2025-07-14T21:53:39.272757860Z" level=info msg="StartContainer for \"2fae060df53ac786ef6138213dc72b4c4013b7d4a6d8cb918268a18ad0223176\" returns successfully" Jul 14 21:53:39.951396 kubelet[2635]: E0714 21:53:39.951337 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:39.954342 kubelet[2635]: I0714 21:53:39.954290 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:53:39.955408 kubelet[2635]: E0714 21:53:39.955352 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:39.962242 kubelet[2635]: I0714 21:53:39.962189 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rvt7n" podStartSLOduration=38.962175288 podStartE2EDuration="38.962175288s" podCreationTimestamp="2025-07-14 21:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:53:39.961324135 +0000 UTC m=+44.322849331" watchObservedRunningTime="2025-07-14 21:53:39.962175288 +0000 UTC m=+44.323700484" Jul 14 21:53:40.048611 systemd[1]: Started sshd@8-10.0.0.64:22-10.0.0.1:40746.service - OpenSSH per-connection server daemon (10.0.0.1:40746). 
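Zooming out, the coredns entries in this section walk the full sandbox-recreate path in order: StopPodSandbox tears down the old network (the CNI DEL flow), RunPodSandbox with Attempt:1 builds a new sandbox and runs CNI ADD to claim a fresh IP, and only then do CreateContainer and StartContainer run inside it, after which kubelet's readiness probing takes over. A sketch of that ordering against a toy interface (criRuntime here is an illustrative stand-in, not the real CRI client API):

// Ordering sketch of the sandbox-recreate flow logged above. The interface
// and fake runtime are assumptions for illustration; the real kubelet
// drives the CRI gRPC services instead.
package main

import "fmt"

type criRuntime interface {
	StopPodSandbox(id string) error
	RunPodSandbox(pod string, attempt uint32) (string, error)
	CreateContainer(sandboxID, name string) (string, error)
	StartContainer(containerID string) error
}

func recreatePod(rt criRuntime, oldSandboxID, pod, container string) error {
	if err := rt.StopPodSandbox(oldSandboxID); err != nil { // CNI DEL, "TearDown network"
		return err
	}
	sandboxID, err := rt.RunPodSandbox(pod, 1) // Attempt:1; CNI ADD assigns a new IP
	if err != nil {
		return err
	}
	cid, err := rt.CreateContainer(sandboxID, container) // containers only after the sandbox
	if err != nil {
		return err
	}
	return rt.StartContainer(cid)
}

type fakeRuntime struct{}

func (fakeRuntime) StopPodSandbox(id string) error { fmt.Println("stop", id); return nil }
func (fakeRuntime) RunPodSandbox(pod string, attempt uint32) (string, error) {
	fmt.Printf("run %s Attempt:%d\n", pod, attempt)
	return "sandbox-new", nil
}
func (fakeRuntime) CreateContainer(sb, name string) (string, error) { return "ctr-" + name, nil }
func (fakeRuntime) StartContainer(id string) error                  { fmt.Println("start", id); return nil }

func main() {
	if err := recreatePod(fakeRuntime{}, "sandbox-old", "coredns-7c65d6cfc9-rvt7n", "coredns"); err != nil {
		panic(err)
	}
}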
Jul 14 21:53:40.086634 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 40746 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:40.088036 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:40.094583 systemd-logind[1524]: New session 9 of user core. Jul 14 21:53:40.106289 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 21:53:40.418455 sshd[5045]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:40.421082 systemd[1]: sshd@8-10.0.0.64:22-10.0.0.1:40746.service: Deactivated successfully. Jul 14 21:53:40.424215 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jul 14 21:53:40.424732 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 21:53:40.426422 systemd-logind[1524]: Removed session 9. Jul 14 21:53:40.445183 systemd-networkd[1232]: cali78f3eab06f3: Gained IPv6LL Jul 14 21:53:40.618847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459241476.mount: Deactivated successfully. Jul 14 21:53:40.750146 containerd[1545]: time="2025-07-14T21:53:40.748758725Z" level=info msg="StopPodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\"" Jul 14 21:53:40.750146 containerd[1545]: time="2025-07-14T21:53:40.749045016Z" level=info msg="StopPodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\"" Jul 14 21:53:40.755721 containerd[1545]: time="2025-07-14T21:53:40.755690186Z" level=info msg="StopPodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\"" Jul 14 21:53:40.767031 systemd-networkd[1232]: cali7973f2a521e: Gained IPv6LL Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.859 [INFO][5100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.859 [INFO][5100] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" iface="eth0" netns="/var/run/netns/cni-3e6df43f-f7b2-b4a2-96de-67f53fd0656a" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.860 [INFO][5100] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" iface="eth0" netns="/var/run/netns/cni-3e6df43f-f7b2-b4a2-96de-67f53fd0656a" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.860 [INFO][5100] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" iface="eth0" netns="/var/run/netns/cni-3e6df43f-f7b2-b4a2-96de-67f53fd0656a" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.860 [INFO][5100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.860 [INFO][5100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.893 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.893 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.893 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.905 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.905 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.907 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:40.921051 containerd[1545]: 2025-07-14 21:53:40.910 [INFO][5100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:40.922783 containerd[1545]: time="2025-07-14T21:53:40.922553386Z" level=info msg="TearDown network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" successfully" Jul 14 21:53:40.922783 containerd[1545]: time="2025-07-14T21:53:40.922593348Z" level=info msg="StopPodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" returns successfully" Jul 14 21:53:40.924446 containerd[1545]: time="2025-07-14T21:53:40.924364855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-grqpf,Uid:ad88b19a-39ed-43ef-8d34-f24a9a9dd91a,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:53:40.924420 systemd[1]: run-netns-cni\x2d3e6df43f\x2df7b2\x2db4a2\x2d96de\x2d67f53fd0656a.mount: Deactivated successfully. Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.852 [INFO][5098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.852 [INFO][5098] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" iface="eth0" netns="/var/run/netns/cni-cca15c07-c86d-7abd-5252-0f5f73097428" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.853 [INFO][5098] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" iface="eth0" netns="/var/run/netns/cni-cca15c07-c86d-7abd-5252-0f5f73097428" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.853 [INFO][5098] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" iface="eth0" netns="/var/run/netns/cni-cca15c07-c86d-7abd-5252-0f5f73097428" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.853 [INFO][5098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.853 [INFO][5098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.913 [INFO][5153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.913 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.913 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.932 [WARNING][5153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.932 [INFO][5153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.935 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:40.941874 containerd[1545]: 2025-07-14 21:53:40.939 [INFO][5098] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:40.942393 containerd[1545]: time="2025-07-14T21:53:40.942032520Z" level=info msg="TearDown network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" successfully" Jul 14 21:53:40.942393 containerd[1545]: time="2025-07-14T21:53:40.942059281Z" level=info msg="StopPodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" returns successfully" Jul 14 21:53:40.944275 containerd[1545]: time="2025-07-14T21:53:40.944238163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxt6z,Uid:00f8cdae-e32e-4020-9c5f-9b5051044975,Namespace:calico-system,Attempt:1,}" Jul 14 21:53:40.946390 systemd[1]: run-netns-cni\x2dcca15c07\x2dc86d\x2d7abd\x2d5252\x2d0f5f73097428.mount: Deactivated successfully. Jul 14 21:53:40.957090 kubelet[2635]: E0714 21:53:40.957062 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.839 [INFO][5114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.839 [INFO][5114] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" iface="eth0" netns="/var/run/netns/cni-12d59e25-0191-ae20-7647-43893ce96d74" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.840 [INFO][5114] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" iface="eth0" netns="/var/run/netns/cni-12d59e25-0191-ae20-7647-43893ce96d74" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.840 [INFO][5114] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" iface="eth0" netns="/var/run/netns/cni-12d59e25-0191-ae20-7647-43893ce96d74" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.841 [INFO][5114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.841 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.918 [INFO][5142] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.918 [INFO][5142] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.935 [INFO][5142] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.948 [WARNING][5142] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.948 [INFO][5142] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.951 [INFO][5142] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:40.963807 containerd[1545]: 2025-07-14 21:53:40.961 [INFO][5114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:40.964467 containerd[1545]: time="2025-07-14T21:53:40.964420802Z" level=info msg="TearDown network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" successfully" Jul 14 21:53:40.964504 containerd[1545]: time="2025-07-14T21:53:40.964467724Z" level=info msg="StopPodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" returns successfully" Jul 14 21:53:40.965099 containerd[1545]: time="2025-07-14T21:53:40.964994384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-mmvvm,Uid:77d9f340-8b20-4c7c-bc84-1d529d731237,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:53:40.967605 systemd[1]: run-netns-cni\x2d12d59e25\x2d0191\x2dae20\x2d7647\x2d43893ce96d74.mount: Deactivated successfully. Jul 14 21:53:41.165236 systemd-networkd[1232]: cali890ceb509b4: Link UP Jul 14 21:53:41.165575 systemd-networkd[1232]: cali890ceb509b4: Gained carrier Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.078 [INFO][5200] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.095 [INFO][5200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rxt6z-eth0 csi-node-driver- calico-system 00f8cdae-e32e-4020-9c5f-9b5051044975 1047 0 2025-07-14 21:53:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rxt6z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali890ceb509b4 [] [] }} ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.095 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.123 [INFO][5227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" HandleID="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.124 [INFO][5227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" HandleID="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rxt6z", "timestamp":"2025-07-14 21:53:41.123962466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.124 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.124 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.124 [INFO][5227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.134 [INFO][5227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.142 [INFO][5227] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.146 [INFO][5227] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.148 [INFO][5227] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.151 [INFO][5227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.151 [INFO][5227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.152 [INFO][5227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068 Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.155 [INFO][5227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.161 [INFO][5227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.161 [INFO][5227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" host="localhost" Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.161 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:41.181358 containerd[1545]: 2025-07-14 21:53:41.161 [INFO][5227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" HandleID="k8s-pod-network.7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.181894 containerd[1545]: 2025-07-14 21:53:41.163 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00f8cdae-e32e-4020-9c5f-9b5051044975", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rxt6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890ceb509b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:41.181894 containerd[1545]: 2025-07-14 21:53:41.163 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.181894 containerd[1545]: 2025-07-14 21:53:41.163 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali890ceb509b4 ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.181894 containerd[1545]: 2025-07-14 21:53:41.165 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.181894 containerd[1545]: 2025-07-14 21:53:41.165 [INFO][5200] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00f8cdae-e32e-4020-9c5f-9b5051044975", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068", Pod:"csi-node-driver-rxt6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890ceb509b4", MAC:"62:40:95:9a:3f:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:41.181894 containerd[1545]: 2025-07-14 21:53:41.178 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068" Namespace="calico-system" Pod="csi-node-driver-rxt6z" WorkloadEndpoint="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:41.203595 containerd[1545]: time="2025-07-14T21:53:41.203439633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:41.203595 containerd[1545]: time="2025-07-14T21:53:41.203488034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:41.203595 containerd[1545]: time="2025-07-14T21:53:41.203533516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:41.204001 containerd[1545]: time="2025-07-14T21:53:41.203950331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:41.219869 containerd[1545]: time="2025-07-14T21:53:41.219820956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:41.220249 containerd[1545]: time="2025-07-14T21:53:41.220215050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 14 21:53:41.221448 containerd[1545]: time="2025-07-14T21:53:41.221345692Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:41.225033 containerd[1545]: time="2025-07-14T21:53:41.223791742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:41.225033 containerd[1545]: time="2025-07-14T21:53:41.224393124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.974552187s" Jul 14 21:53:41.225033 containerd[1545]: time="2025-07-14T21:53:41.224426125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 14 21:53:41.233145 containerd[1545]: time="2025-07-14T21:53:41.233098605Z" level=info msg="CreateContainer within sandbox \"2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 21:53:41.238648 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:41.248859 containerd[1545]: time="2025-07-14T21:53:41.248821024Z" level=info msg="CreateContainer within sandbox \"2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1b5c27a7a9d8abbc29fabbe8df9ceaa5fb33cc3638eae5bd5a304ed01a0cdeac\"" Jul 14 21:53:41.249469 containerd[1545]: time="2025-07-14T21:53:41.249401525Z" level=info msg="StartContainer for \"1b5c27a7a9d8abbc29fabbe8df9ceaa5fb33cc3638eae5bd5a304ed01a0cdeac\"" Jul 14 21:53:41.254102 containerd[1545]: time="2025-07-14T21:53:41.254067697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rxt6z,Uid:00f8cdae-e32e-4020-9c5f-9b5051044975,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068\"" Jul 14 21:53:41.255740 containerd[1545]: time="2025-07-14T21:53:41.255719118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 21:53:41.279155 systemd-networkd[1232]: cali01930bf1fa1: Link UP Jul 14 21:53:41.281975 systemd-networkd[1232]: cali01930bf1fa1: Gained carrier Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.084 [INFO][5178] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.096 [INFO][5178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0 calico-apiserver-556f958c76- calico-apiserver ad88b19a-39ed-43ef-8d34-f24a9a9dd91a 1048 0 2025-07-14 21:53:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556f958c76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-556f958c76-grqpf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali01930bf1fa1 [] [] }} ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.096 [INFO][5178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.127 [INFO][5224] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" HandleID="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.128 [INFO][5224] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" HandleID="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c560), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-556f958c76-grqpf", "timestamp":"2025-07-14 21:53:41.127889731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.129 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.161 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.161 [INFO][5224] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.236 [INFO][5224] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.242 [INFO][5224] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.248 [INFO][5224] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.253 [INFO][5224] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.257 [INFO][5224] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.257 [INFO][5224] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.259 [INFO][5224] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.264 [INFO][5224] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.270 [INFO][5224] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.270 [INFO][5224] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" host="localhost" Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.270 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
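
Annotation: the claim above hands out 192.168.88.135 from the affine block 192.168.88.128/26, which spans .128 through .191 (2^(32-26) = 64 addresses; each pod address is routed as a /32, so the whole block is usable). A quick containment check with Go's net/netip, using the exact values quoted from the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // the node's affine block
	ip := netip.MustParseAddr("192.168.88.135")         // the address IPAM claimed
	fmt.Println(block.Contains(ip))                     // true: .135 sits inside .128-.191
}
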
Jul 14 21:53:41.294684 containerd[1545]: 2025-07-14 21:53:41.270 [INFO][5224] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" HandleID="k8s-pod-network.8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.295397 containerd[1545]: 2025-07-14 21:53:41.275 [INFO][5178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-556f958c76-grqpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01930bf1fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:41.295397 containerd[1545]: 2025-07-14 21:53:41.275 [INFO][5178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.295397 containerd[1545]: 2025-07-14 21:53:41.275 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01930bf1fa1 ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.295397 containerd[1545]: 2025-07-14 21:53:41.280 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.295397 containerd[1545]: 2025-07-14 21:53:41.280 [INFO][5178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab", Pod:"calico-apiserver-556f958c76-grqpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01930bf1fa1", MAC:"de:83:78:8a:38:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:41.295397 containerd[1545]: 2025-07-14 21:53:41.292 [INFO][5178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-grqpf" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:41.322629 containerd[1545]: time="2025-07-14T21:53:41.322277888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:41.322629 containerd[1545]: time="2025-07-14T21:53:41.322334770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:41.322629 containerd[1545]: time="2025-07-14T21:53:41.322345571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:41.322629 containerd[1545]: time="2025-07-14T21:53:41.322438254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:41.337832 containerd[1545]: time="2025-07-14T21:53:41.337793580Z" level=info msg="StartContainer for \"1b5c27a7a9d8abbc29fabbe8df9ceaa5fb33cc3638eae5bd5a304ed01a0cdeac\" returns successfully" Jul 14 21:53:41.361646 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:41.391124 systemd-networkd[1232]: cali941c34f0999: Link UP Jul 14 21:53:41.391807 systemd-networkd[1232]: cali941c34f0999: Gained carrier Jul 14 21:53:41.409110 containerd[1545]: time="2025-07-14T21:53:41.408703990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-grqpf,Uid:ad88b19a-39ed-43ef-8d34-f24a9a9dd91a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab\"" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.086 [INFO][5180] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.104 [INFO][5180] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0 calico-apiserver-556f958c76- calico-apiserver 77d9f340-8b20-4c7c-bc84-1d529d731237 1046 0 2025-07-14 21:53:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556f958c76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-556f958c76-mmvvm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali941c34f0999 [] [] }} ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.104 [INFO][5180] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.141 [INFO][5238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" HandleID="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.141 [INFO][5238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" HandleID="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-556f958c76-mmvvm", "timestamp":"2025-07-14 21:53:41.141238262 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.141 [INFO][5238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.271 [INFO][5238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.271 [INFO][5238] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.336 [INFO][5238] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.346 [INFO][5238] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.355 [INFO][5238] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.358 [INFO][5238] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.363 [INFO][5238] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.363 [INFO][5238] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.368 [INFO][5238] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.372 [INFO][5238] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.378 [INFO][5238] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.378 [INFO][5238] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" host="localhost" Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.378 [INFO][5238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
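
Annotation: this second assignment walks the same affinity path and lands on 192.168.88.136, the next address after the .135 handed to the grqpf pod moments earlier. A simplified next-free scan over the block is sketched below; the set of earlier allocations (.128 through .135) is a hypothetical stand-in, and real Calico tracks allocations in a block resource in its datastore rather than an in-memory set.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in block not yet allocated, mirroring
// how the journal shows .135 assigned to one pod and .136 to the next from
// the same affine block. Simplification for illustration only.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}
	for i := 128; i <= 135; i++ { // hypothetical: earlier pods hold .128-.135
		allocated[netip.AddrFrom4([4]byte{192, 168, 88, byte(i)})] = true
	}
	a, _ := nextFree(block, allocated)
	fmt.Println(a) // 192.168.88.136, matching the address given to the mmvvm pod
}
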
Jul 14 21:53:41.409928 containerd[1545]: 2025-07-14 21:53:41.378 [INFO][5238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" HandleID="k8s-pod-network.e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.410811 containerd[1545]: 2025-07-14 21:53:41.388 [INFO][5180] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"77d9f340-8b20-4c7c-bc84-1d529d731237", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-556f958c76-mmvvm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali941c34f0999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:41.410811 containerd[1545]: 2025-07-14 21:53:41.388 [INFO][5180] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.410811 containerd[1545]: 2025-07-14 21:53:41.388 [INFO][5180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali941c34f0999 ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.410811 containerd[1545]: 2025-07-14 21:53:41.391 [INFO][5180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.410811 containerd[1545]: 2025-07-14 21:53:41.392 [INFO][5180] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"77d9f340-8b20-4c7c-bc84-1d529d731237", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a", Pod:"calico-apiserver-556f958c76-mmvvm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali941c34f0999", MAC:"9e:7d:61:e7:54:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:41.410811 containerd[1545]: 2025-07-14 21:53:41.406 [INFO][5180] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a" Namespace="calico-apiserver" Pod="calico-apiserver-556f958c76-mmvvm" WorkloadEndpoint="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:41.427373 containerd[1545]: time="2025-07-14T21:53:41.427128189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:53:41.427373 containerd[1545]: time="2025-07-14T21:53:41.427195111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:53:41.427373 containerd[1545]: time="2025-07-14T21:53:41.427210232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:41.427373 containerd[1545]: time="2025-07-14T21:53:41.427295555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:53:41.459744 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:53:41.478663 containerd[1545]: time="2025-07-14T21:53:41.478570443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556f958c76-mmvvm,Uid:77d9f340-8b20-4c7c-bc84-1d529d731237,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a\"" Jul 14 21:53:41.979365 kubelet[2635]: E0714 21:53:41.979049 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:42.299323 containerd[1545]: time="2025-07-14T21:53:42.299206590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:42.299836 containerd[1545]: time="2025-07-14T21:53:42.299800771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 14 21:53:42.300891 containerd[1545]: time="2025-07-14T21:53:42.300841769Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:42.302898 containerd[1545]: time="2025-07-14T21:53:42.302868842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:42.304082 containerd[1545]: time="2025-07-14T21:53:42.303493185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.047677383s" Jul 14 21:53:42.304082 containerd[1545]: time="2025-07-14T21:53:42.303522266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 14 21:53:42.304572 containerd[1545]: time="2025-07-14T21:53:42.304543262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:53:42.305610 containerd[1545]: time="2025-07-14T21:53:42.305580540Z" level=info msg="CreateContainer within sandbox \"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 21:53:42.320344 containerd[1545]: time="2025-07-14T21:53:42.320311191Z" level=info msg="CreateContainer within sandbox \"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ba2b7e96e9cb8cb7f5ec197b445d2acddbcf6e56e32c2286651adc4f75226d02\"" Jul 14 21:53:42.320960 containerd[1545]: time="2025-07-14T21:53:42.320832010Z" level=info msg="StartContainer for \"ba2b7e96e9cb8cb7f5ec197b445d2acddbcf6e56e32c2286651adc4f75226d02\"" Jul 14 21:53:42.370790 containerd[1545]: time="2025-07-14T21:53:42.370731529Z" level=info msg="StartContainer for \"ba2b7e96e9cb8cb7f5ec197b445d2acddbcf6e56e32c2286651adc4f75226d02\" returns successfully" Jul 14 
21:53:42.429185 systemd-networkd[1232]: cali890ceb509b4: Gained IPv6LL Jul 14 21:53:42.621172 systemd-networkd[1232]: cali941c34f0999: Gained IPv6LL Jul 14 21:53:42.942110 systemd-networkd[1232]: cali01930bf1fa1: Gained IPv6LL Jul 14 21:53:43.835323 containerd[1545]: time="2025-07-14T21:53:43.835281013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:43.835903 containerd[1545]: time="2025-07-14T21:53:43.835868953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 14 21:53:43.837903 containerd[1545]: time="2025-07-14T21:53:43.837871104Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:43.840004 containerd[1545]: time="2025-07-14T21:53:43.839953138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:43.841062 containerd[1545]: time="2025-07-14T21:53:43.840881210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.536307827s" Jul 14 21:53:43.841062 containerd[1545]: time="2025-07-14T21:53:43.840917292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:53:43.841835 containerd[1545]: time="2025-07-14T21:53:43.841801083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:53:43.842795 containerd[1545]: time="2025-07-14T21:53:43.842765317Z" level=info msg="CreateContainer within sandbox \"8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:53:43.852337 containerd[1545]: time="2025-07-14T21:53:43.852277893Z" level=info msg="CreateContainer within sandbox \"8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6f598b448fd90b0b109d91eb3efe46d8c791b68192ad8800a8b5ca1f48329d14\"" Jul 14 21:53:43.855341 containerd[1545]: time="2025-07-14T21:53:43.855313080Z" level=info msg="StartContainer for \"6f598b448fd90b0b109d91eb3efe46d8c791b68192ad8800a8b5ca1f48329d14\"" Jul 14 21:53:43.917241 containerd[1545]: time="2025-07-14T21:53:43.917195787Z" level=info msg="StartContainer for \"6f598b448fd90b0b109d91eb3efe46d8c791b68192ad8800a8b5ca1f48329d14\" returns successfully" Jul 14 21:53:43.999822 kubelet[2635]: I0714 21:53:43.999082 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-vz2ck" podStartSLOduration=28.020942561 podStartE2EDuration="29.99906436s" podCreationTimestamp="2025-07-14 21:53:14 +0000 UTC" firstStartedPulling="2025-07-14 21:53:39.249373319 +0000 UTC m=+43.610898515" lastFinishedPulling="2025-07-14 21:53:41.227495118 +0000 UTC m=+45.589020314" observedRunningTime="2025-07-14 21:53:41.99100199 +0000 
UTC m=+46.352527186" watchObservedRunningTime="2025-07-14 21:53:43.99906436 +0000 UTC m=+48.360589556" Jul 14 21:53:44.001657 kubelet[2635]: I0714 21:53:44.000227 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-556f958c76-grqpf" podStartSLOduration=31.569679002 podStartE2EDuration="34.000213281s" podCreationTimestamp="2025-07-14 21:53:10 +0000 UTC" firstStartedPulling="2025-07-14 21:53:41.411117319 +0000 UTC m=+45.772642515" lastFinishedPulling="2025-07-14 21:53:43.841651598 +0000 UTC m=+48.203176794" observedRunningTime="2025-07-14 21:53:43.99793028 +0000 UTC m=+48.359455476" watchObservedRunningTime="2025-07-14 21:53:44.000213281 +0000 UTC m=+48.361738517" Jul 14 21:53:44.023291 kubelet[2635]: I0714 21:53:44.023248 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:53:44.023598 kubelet[2635]: E0714 21:53:44.023574 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:44.096142 containerd[1545]: time="2025-07-14T21:53:44.096004842Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:44.096894 containerd[1545]: time="2025-07-14T21:53:44.096857551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 21:53:44.098999 containerd[1545]: time="2025-07-14T21:53:44.098956104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 257.1201ms" Jul 14 21:53:44.098999 containerd[1545]: time="2025-07-14T21:53:44.098993225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:53:44.100326 containerd[1545]: time="2025-07-14T21:53:44.100300950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 21:53:44.100849 containerd[1545]: time="2025-07-14T21:53:44.100823489Z" level=info msg="CreateContainer within sandbox \"e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:53:44.114146 containerd[1545]: time="2025-07-14T21:53:44.114097429Z" level=info msg="CreateContainer within sandbox \"e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b5d07a285ee37b4d530b91448c9dfe541a3255c4fa42d985b6f0fbb3a6f8f677\"" Jul 14 21:53:44.114950 containerd[1545]: time="2025-07-14T21:53:44.114876216Z" level=info msg="StartContainer for \"b5d07a285ee37b4d530b91448c9dfe541a3255c4fa42d985b6f0fbb3a6f8f677\"" Jul 14 21:53:44.198612 containerd[1545]: time="2025-07-14T21:53:44.198571237Z" level=info msg="StartContainer for \"b5d07a285ee37b4d530b91448c9dfe541a3255c4fa42d985b6f0fbb3a6f8f677\" returns successfully" Jul 14 21:53:44.553044 kernel: bpftool[5738]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 21:53:44.739318 systemd-networkd[1232]: vxlan.calico: 
Link UP Jul 14 21:53:44.739325 systemd-networkd[1232]: vxlan.calico: Gained carrier Jul 14 21:53:44.992253 kubelet[2635]: E0714 21:53:44.992214 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:53:44.992573 kubelet[2635]: I0714 21:53:44.992442 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:53:45.197311 containerd[1545]: time="2025-07-14T21:53:45.197250730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:45.198750 containerd[1545]: time="2025-07-14T21:53:45.198715700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 14 21:53:45.199641 containerd[1545]: time="2025-07-14T21:53:45.199601810Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:45.201932 containerd[1545]: time="2025-07-14T21:53:45.201903368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:53:45.203334 containerd[1545]: time="2025-07-14T21:53:45.203302456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.102967305s" Jul 14 21:53:45.203373 containerd[1545]: time="2025-07-14T21:53:45.203347938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 14 21:53:45.205665 containerd[1545]: time="2025-07-14T21:53:45.205634735Z" level=info msg="CreateContainer within sandbox \"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 21:53:45.219712 containerd[1545]: time="2025-07-14T21:53:45.219675053Z" level=info msg="CreateContainer within sandbox \"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5919a57ad1f5900cf708eeb4f7aededc0f3531f59188557babafc2044c37d07d\"" Jul 14 21:53:45.220164 containerd[1545]: time="2025-07-14T21:53:45.220139429Z" level=info msg="StartContainer for \"5919a57ad1f5900cf708eeb4f7aededc0f3531f59188557babafc2044c37d07d\"" Jul 14 21:53:45.309286 containerd[1545]: time="2025-07-14T21:53:45.309171219Z" level=info msg="StartContainer for \"5919a57ad1f5900cf708eeb4f7aededc0f3531f59188557babafc2044c37d07d\" returns successfully" Jul 14 21:53:45.428303 systemd[1]: Started sshd@9-10.0.0.64:22-10.0.0.1:46566.service - OpenSSH per-connection server daemon (10.0.0.1:46566). 
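
Annotation: the pull records above pair a byte count with a wall-clock duration, e.g. 13754366 bytes of node-driver-registrar content read in 1.102967305s. A throwaway calculation using only figures quoted from the log; treat it as an order-of-magnitude estimate, since "bytes read" (registry traffic) and the "size" fields (image size) in these records measure different things.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures quoted from the pull record above for
	// ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2.
	const bytesRead = 13754366
	d, err := time.ParseDuration("1.102967305s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.1f MB/s\n", bytesRead/d.Seconds()/1e6) // roughly 12.5 MB/s
}
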
Jul 14 21:53:45.475067 sshd[5862]: Accepted publickey for core from 10.0.0.1 port 46566 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:45.478125 sshd[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:45.494173 systemd-logind[1524]: New session 10 of user core. Jul 14 21:53:45.500332 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 21:53:45.860234 kubelet[2635]: I0714 21:53:45.860191 2635 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 21:53:45.875246 kubelet[2635]: I0714 21:53:45.875033 2635 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 21:53:45.879537 sshd[5862]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:45.883146 systemd[1]: sshd@9-10.0.0.64:22-10.0.0.1:46566.service: Deactivated successfully. Jul 14 21:53:45.887040 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 21:53:45.887145 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Jul 14 21:53:45.888612 systemd-logind[1524]: Removed session 10. Jul 14 21:53:45.995123 kubelet[2635]: I0714 21:53:45.995070 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:53:46.008048 kubelet[2635]: I0714 21:53:46.007783 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rxt6z" podStartSLOduration=28.05928002 podStartE2EDuration="32.00776567s" podCreationTimestamp="2025-07-14 21:53:14 +0000 UTC" firstStartedPulling="2025-07-14 21:53:41.25552315 +0000 UTC m=+45.617048346" lastFinishedPulling="2025-07-14 21:53:45.2040088 +0000 UTC m=+49.565533996" observedRunningTime="2025-07-14 21:53:46.007577023 +0000 UTC m=+50.369102219" watchObservedRunningTime="2025-07-14 21:53:46.00776567 +0000 UTC m=+50.369290866" Jul 14 21:53:46.008217 kubelet[2635]: I0714 21:53:46.008150 2635 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-556f958c76-mmvvm" podStartSLOduration=33.388221279 podStartE2EDuration="36.008144602s" podCreationTimestamp="2025-07-14 21:53:10 +0000 UTC" firstStartedPulling="2025-07-14 21:53:41.479716125 +0000 UTC m=+45.841241321" lastFinishedPulling="2025-07-14 21:53:44.099639408 +0000 UTC m=+48.461164644" observedRunningTime="2025-07-14 21:53:45.009653946 +0000 UTC m=+49.371179222" watchObservedRunningTime="2025-07-14 21:53:46.008144602 +0000 UTC m=+50.369669798" Jul 14 21:53:46.781165 systemd-networkd[1232]: vxlan.calico: Gained IPv6LL Jul 14 21:53:50.896318 systemd[1]: Started sshd@10-10.0.0.64:22-10.0.0.1:46584.service - OpenSSH per-connection server daemon (10.0.0.1:46584). Jul 14 21:53:50.935785 sshd[5914]: Accepted publickey for core from 10.0.0.1 port 46584 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:50.937229 sshd[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:50.941179 systemd-logind[1524]: New session 11 of user core. Jul 14 21:53:50.948331 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 21:53:51.165387 sshd[5914]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:51.173720 systemd[1]: Started sshd@11-10.0.0.64:22-10.0.0.1:46596.service - OpenSSH per-connection server daemon (10.0.0.1:46596). 
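
Annotation: the pod_startup_latency_tracker lines above are plain timestamp arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the csi-node-driver-rxt6z numbers from the quoted timestamps lands within a fraction of a millisecond of the logged values, since kubelet samples its clocks at slightly different points.

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps quoted from the csi-node-driver-rxt6z record above.
	created := parse("2025-07-14 21:53:14 +0000 UTC")
	running := parse("2025-07-14 21:53:46.007577023 +0000 UTC")
	pullStart := parse("2025-07-14 21:53:41.25552315 +0000 UTC")
	pullEnd := parse("2025-07-14 21:53:45.2040088 +0000 UTC")

	e2e := running.Sub(created)         // ~32.008s, cf. podStartE2EDuration=32.00776567s
	slo := e2e - pullEnd.Sub(pullStart) // ~28.059s, cf. podStartSLOduration=28.05928002
	fmt.Println(e2e, slo)
}
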
Jul 14 21:53:51.174816 systemd[1]: sshd@10-10.0.0.64:22-10.0.0.1:46584.service: Deactivated successfully. Jul 14 21:53:51.176855 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 21:53:51.178935 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. Jul 14 21:53:51.180887 systemd-logind[1524]: Removed session 11. Jul 14 21:53:51.206830 sshd[5928]: Accepted publickey for core from 10.0.0.1 port 46596 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:51.208169 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:51.212779 systemd-logind[1524]: New session 12 of user core. Jul 14 21:53:51.219260 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 21:53:51.428256 sshd[5928]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:51.442483 systemd[1]: Started sshd@12-10.0.0.64:22-10.0.0.1:46604.service - OpenSSH per-connection server daemon (10.0.0.1:46604). Jul 14 21:53:51.443321 systemd[1]: sshd@11-10.0.0.64:22-10.0.0.1:46596.service: Deactivated successfully. Jul 14 21:53:51.445810 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 21:53:51.447323 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. Jul 14 21:53:51.450266 systemd-logind[1524]: Removed session 12. Jul 14 21:53:51.477127 sshd[5942]: Accepted publickey for core from 10.0.0.1 port 46604 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:51.478360 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:51.482039 systemd-logind[1524]: New session 13 of user core. Jul 14 21:53:51.493325 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 21:53:51.645534 sshd[5942]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:51.648593 systemd[1]: sshd@12-10.0.0.64:22-10.0.0.1:46604.service: Deactivated successfully. Jul 14 21:53:51.651700 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. Jul 14 21:53:51.653116 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 21:53:51.654696 systemd-logind[1524]: Removed session 13. Jul 14 21:53:53.005037 kubelet[2635]: I0714 21:53:53.004950 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:53:55.743048 containerd[1545]: time="2025-07-14T21:53:55.742631081Z" level=info msg="StopPodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\"" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.780 [WARNING][6019] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5f83ff6d-3a8d-4195-8362-ba2ec00150cb", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff", Pod:"goldmane-58fd7646b9-vz2ck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78f3eab06f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.781 [INFO][6019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.781 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" iface="eth0" netns="" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.781 [INFO][6019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.781 [INFO][6019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.800 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.800 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.800 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.808 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.808 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.810 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:55.816068 containerd[1545]: 2025-07-14 21:53:55.812 [INFO][6019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.816756 containerd[1545]: time="2025-07-14T21:53:55.816102050Z" level=info msg="TearDown network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" successfully" Jul 14 21:53:55.816756 containerd[1545]: time="2025-07-14T21:53:55.816124131Z" level=info msg="StopPodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" returns successfully" Jul 14 21:53:55.816803 containerd[1545]: time="2025-07-14T21:53:55.816750829Z" level=info msg="RemovePodSandbox for \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\"" Jul 14 21:53:55.825877 containerd[1545]: time="2025-07-14T21:53:55.825821977Z" level=info msg="Forcibly stopping sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\"" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.858 [WARNING][6049] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"5f83ff6d-3a8d-4195-8362-ba2ec00150cb", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ac8dd9c55bcadf3ef7cf1caa0028be6ac213c867c12221b2a624c78b3f4dbff", Pod:"goldmane-58fd7646b9-vz2ck", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali78f3eab06f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.859 [INFO][6049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.859 [INFO][6049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" iface="eth0" netns="" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.859 [INFO][6049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.859 [INFO][6049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.877 [INFO][6057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.877 [INFO][6057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.877 [INFO][6057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.885 [WARNING][6057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.886 [INFO][6057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" HandleID="k8s-pod-network.97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Workload="localhost-k8s-goldmane--58fd7646b9--vz2ck-eth0" Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.887 [INFO][6057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:55.892785 containerd[1545]: 2025-07-14 21:53:55.889 [INFO][6049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a" Jul 14 21:53:55.893237 containerd[1545]: time="2025-07-14T21:53:55.892835956Z" level=info msg="TearDown network for sandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" successfully" Jul 14 21:53:55.923343 containerd[1545]: time="2025-07-14T21:53:55.923285016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:55.923464 containerd[1545]: time="2025-07-14T21:53:55.923382978Z" level=info msg="RemovePodSandbox \"97b83900b60a796036a95be3638bd4b93c52211d9d516c88b0febdddd19dea7a\" returns successfully" Jul 14 21:53:55.923939 containerd[1545]: time="2025-07-14T21:53:55.923901754Z" level=info msg="StopPodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\"" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.956 [WARNING][6075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0", GenerateName:"calico-kube-controllers-58cbd4c654-", Namespace:"calico-system", SelfLink:"", UID:"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cbd4c654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f", Pod:"calico-kube-controllers-58cbd4c654-nf4d4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09369dbf1dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.957 [INFO][6075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.957 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" iface="eth0" netns="" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.957 [INFO][6075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.957 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.976 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.976 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.976 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.984 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.984 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.985 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:55.989682 containerd[1545]: 2025-07-14 21:53:55.987 [INFO][6075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:55.990133 containerd[1545]: time="2025-07-14T21:53:55.989729218Z" level=info msg="TearDown network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" successfully" Jul 14 21:53:55.990133 containerd[1545]: time="2025-07-14T21:53:55.989755058Z" level=info msg="StopPodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" returns successfully" Jul 14 21:53:55.990281 containerd[1545]: time="2025-07-14T21:53:55.990245073Z" level=info msg="RemovePodSandbox for \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\"" Jul 14 21:53:55.990281 containerd[1545]: time="2025-07-14T21:53:55.990278834Z" level=info msg="Forcibly stopping sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\"" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.028 [WARNING][6101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0", GenerateName:"calico-kube-controllers-58cbd4c654-", Namespace:"calico-system", SelfLink:"", UID:"a3a3148e-78bd-4afa-a1ab-e95fcbbdb088", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58cbd4c654", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bf6894f86851d8f9779bcb3113fadadea4b82c265d95fdfc09c2ce590b44486f", Pod:"calico-kube-controllers-58cbd4c654-nf4d4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali09369dbf1dd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.028 [INFO][6101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.028 [INFO][6101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" iface="eth0" netns="" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.028 [INFO][6101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.028 [INFO][6101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.047 [INFO][6110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.048 [INFO][6110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.048 [INFO][6110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.057 [WARNING][6110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.057 [INFO][6110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" HandleID="k8s-pod-network.d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Workload="localhost-k8s-calico--kube--controllers--58cbd4c654--nf4d4-eth0" Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.059 [INFO][6110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.063981 containerd[1545]: 2025-07-14 21:53:56.061 [INFO][6101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad" Jul 14 21:53:56.063981 containerd[1545]: time="2025-07-14T21:53:56.063947550Z" level=info msg="TearDown network for sandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" successfully" Jul 14 21:53:56.069008 containerd[1545]: time="2025-07-14T21:53:56.068961016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:56.069114 containerd[1545]: time="2025-07-14T21:53:56.069065900Z" level=info msg="RemovePodSandbox \"d907581f9df8c06fda6f2cad8ca8ce939987393cc65d57fd2b0cfd1ed638c5ad\" returns successfully" Jul 14 21:53:56.069632 containerd[1545]: time="2025-07-14T21:53:56.069605475Z" level=info msg="StopPodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\"" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.105 [WARNING][6128] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00f8cdae-e32e-4020-9c5f-9b5051044975", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068", Pod:"csi-node-driver-rxt6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890ceb509b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.105 [INFO][6128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.105 [INFO][6128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" iface="eth0" netns="" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.105 [INFO][6128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.105 [INFO][6128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.126 [INFO][6138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.126 [INFO][6138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.126 [INFO][6138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.136 [WARNING][6138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.136 [INFO][6138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.138 [INFO][6138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.142222 containerd[1545]: 2025-07-14 21:53:56.140 [INFO][6128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.142720 containerd[1545]: time="2025-07-14T21:53:56.142271719Z" level=info msg="TearDown network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" successfully" Jul 14 21:53:56.142720 containerd[1545]: time="2025-07-14T21:53:56.142297839Z" level=info msg="StopPodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" returns successfully" Jul 14 21:53:56.142795 containerd[1545]: time="2025-07-14T21:53:56.142769693Z" level=info msg="RemovePodSandbox for \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\"" Jul 14 21:53:56.142820 containerd[1545]: time="2025-07-14T21:53:56.142803294Z" level=info msg="Forcibly stopping sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\"" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.182 [WARNING][6154] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"00f8cdae-e32e-4020-9c5f-9b5051044975", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7e6280bcec3bbb8be12cab19a56a7f96db12388e5ddcef56ead54421f1dd3068", Pod:"csi-node-driver-rxt6z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali890ceb509b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.182 [INFO][6154] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.182 [INFO][6154] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" iface="eth0" netns="" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.182 [INFO][6154] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.182 [INFO][6154] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.202 [INFO][6163] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.202 [INFO][6163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.202 [INFO][6163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.211 [WARNING][6163] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.211 [INFO][6163] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" HandleID="k8s-pod-network.de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Workload="localhost-k8s-csi--node--driver--rxt6z-eth0" Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.212 [INFO][6163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.216506 containerd[1545]: 2025-07-14 21:53:56.214 [INFO][6154] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50" Jul 14 21:53:56.216927 containerd[1545]: time="2025-07-14T21:53:56.216536969Z" level=info msg="TearDown network for sandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" successfully" Jul 14 21:53:56.222405 containerd[1545]: time="2025-07-14T21:53:56.222364019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:56.222497 containerd[1545]: time="2025-07-14T21:53:56.222440941Z" level=info msg="RemovePodSandbox \"de51803b36a91a764aa0875a8bc3203583e2257d9e0bc8cf4b50355fa2886a50\" returns successfully" Jul 14 21:53:56.222919 containerd[1545]: time="2025-07-14T21:53:56.222895555Z" level=info msg="StopPodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\"" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.256 [WARNING][6181] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"77d9f340-8b20-4c7c-bc84-1d529d731237", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a", Pod:"calico-apiserver-556f958c76-mmvvm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali941c34f0999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.256 [INFO][6181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.256 [INFO][6181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" iface="eth0" netns="" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.256 [INFO][6181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.256 [INFO][6181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.276 [INFO][6190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.276 [INFO][6190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.276 [INFO][6190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.285 [WARNING][6190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.285 [INFO][6190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.286 [INFO][6190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.290870 containerd[1545]: 2025-07-14 21:53:56.288 [INFO][6181] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.291289 containerd[1545]: time="2025-07-14T21:53:56.290918222Z" level=info msg="TearDown network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" successfully" Jul 14 21:53:56.291289 containerd[1545]: time="2025-07-14T21:53:56.290953543Z" level=info msg="StopPodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" returns successfully" Jul 14 21:53:56.291424 containerd[1545]: time="2025-07-14T21:53:56.291384236Z" level=info msg="RemovePodSandbox for \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\"" Jul 14 21:53:56.291461 containerd[1545]: time="2025-07-14T21:53:56.291422237Z" level=info msg="Forcibly stopping sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\"" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.326 [WARNING][6208] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"77d9f340-8b20-4c7c-bc84-1d529d731237", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6ee987a4e0e66c9aab0187b7ccbb379a676b13d8844e7a7b2fc40e4eac9457a", Pod:"calico-apiserver-556f958c76-mmvvm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali941c34f0999", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.327 [INFO][6208] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.327 [INFO][6208] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" iface="eth0" netns="" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.327 [INFO][6208] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.327 [INFO][6208] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.346 [INFO][6217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.346 [INFO][6217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.346 [INFO][6217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.354 [WARNING][6217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.354 [INFO][6217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" HandleID="k8s-pod-network.b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Workload="localhost-k8s-calico--apiserver--556f958c76--mmvvm-eth0" Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.356 [INFO][6217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.360171 containerd[1545]: 2025-07-14 21:53:56.358 [INFO][6208] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d" Jul 14 21:53:56.362009 containerd[1545]: time="2025-07-14T21:53:56.360158926Z" level=info msg="TearDown network for sandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" successfully" Jul 14 21:53:56.404448 containerd[1545]: time="2025-07-14T21:53:56.404391818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:56.404656 containerd[1545]: time="2025-07-14T21:53:56.404463540Z" level=info msg="RemovePodSandbox \"b2f5748816c2d9ece309543fcbc7dbe4bef6efe240f5f4c351231bb2b1ed430d\" returns successfully" Jul 14 21:53:56.405153 containerd[1545]: time="2025-07-14T21:53:56.404887993Z" level=info msg="StopPodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\"" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.436 [WARNING][6235] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" WorkloadEndpoint="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.437 [INFO][6235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.437 [INFO][6235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" iface="eth0" netns="" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.437 [INFO][6235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.437 [INFO][6235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.454 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.454 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.454 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.463 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.463 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.465 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.468965 containerd[1545]: 2025-07-14 21:53:56.467 [INFO][6235] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.469492 containerd[1545]: time="2025-07-14T21:53:56.469369557Z" level=info msg="TearDown network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" successfully" Jul 14 21:53:56.469492 containerd[1545]: time="2025-07-14T21:53:56.469398958Z" level=info msg="StopPodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" returns successfully" Jul 14 21:53:56.469889 containerd[1545]: time="2025-07-14T21:53:56.469845931Z" level=info msg="RemovePodSandbox for \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\"" Jul 14 21:53:56.470003 containerd[1545]: time="2025-07-14T21:53:56.469978735Z" level=info msg="Forcibly stopping sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\"" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.507 [WARNING][6263] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" WorkloadEndpoint="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.507 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.507 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" iface="eth0" netns="" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.507 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.507 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.525 [INFO][6272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.525 [INFO][6272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.525 [INFO][6272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.534 [WARNING][6272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.534 [INFO][6272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" HandleID="k8s-pod-network.5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Workload="localhost-k8s-whisker--6dc9cb6569--s2rx5-eth0" Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.536 [INFO][6272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.540287 containerd[1545]: 2025-07-14 21:53:56.538 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561" Jul 14 21:53:56.540287 containerd[1545]: time="2025-07-14T21:53:56.540271149Z" level=info msg="TearDown network for sandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" successfully" Jul 14 21:53:56.543175 containerd[1545]: time="2025-07-14T21:53:56.543130592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:56.543265 containerd[1545]: time="2025-07-14T21:53:56.543190674Z" level=info msg="RemovePodSandbox \"5585a853fbd8159dc16b6259e6d2a3c8b8bb241877ebc41a9e55e87a08e70561\" returns successfully" Jul 14 21:53:56.543714 containerd[1545]: time="2025-07-14T21:53:56.543678088Z" level=info msg="StopPodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\"" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.577 [WARNING][6290] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab", Pod:"calico-apiserver-556f958c76-grqpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01930bf1fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.577 [INFO][6290] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.577 [INFO][6290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" iface="eth0" netns="" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.577 [INFO][6290] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.577 [INFO][6290] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.595 [INFO][6300] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.595 [INFO][6300] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.595 [INFO][6300] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.604 [WARNING][6300] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.604 [INFO][6300] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.605 [INFO][6300] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.609317 containerd[1545]: 2025-07-14 21:53:56.607 [INFO][6290] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.609740 containerd[1545]: time="2025-07-14T21:53:56.609377848Z" level=info msg="TearDown network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" successfully" Jul 14 21:53:56.609740 containerd[1545]: time="2025-07-14T21:53:56.609406049Z" level=info msg="StopPodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" returns successfully" Jul 14 21:53:56.609912 containerd[1545]: time="2025-07-14T21:53:56.609867862Z" level=info msg="RemovePodSandbox for \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\"" Jul 14 21:53:56.609912 containerd[1545]: time="2025-07-14T21:53:56.609908224Z" level=info msg="Forcibly stopping sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\"" Jul 14 21:53:56.654378 systemd[1]: Started sshd@13-10.0.0.64:22-10.0.0.1:36996.service - OpenSSH per-connection server daemon (10.0.0.1:36996). Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.642 [WARNING][6318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0", GenerateName:"calico-apiserver-556f958c76-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad88b19a-39ed-43ef-8d34-f24a9a9dd91a", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556f958c76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8da0e5a8d1c866cbfd347dbf7588b0e52ef56c308f266850d4752730a6c58eab", Pod:"calico-apiserver-556f958c76-grqpf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali01930bf1fa1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.642 [INFO][6318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.642 [INFO][6318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" iface="eth0" netns="" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.642 [INFO][6318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.642 [INFO][6318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.672 [INFO][6327] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.672 [INFO][6327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.672 [INFO][6327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.681 [WARNING][6327] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.681 [INFO][6327] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" HandleID="k8s-pod-network.cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Workload="localhost-k8s-calico--apiserver--556f958c76--grqpf-eth0" Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.683 [INFO][6327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.687495 containerd[1545]: 2025-07-14 21:53:56.685 [INFO][6318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf" Jul 14 21:53:56.687881 containerd[1545]: time="2025-07-14T21:53:56.687550812Z" level=info msg="TearDown network for sandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" successfully" Jul 14 21:53:56.694769 containerd[1545]: time="2025-07-14T21:53:56.694719022Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:56.694875 containerd[1545]: time="2025-07-14T21:53:56.694846906Z" level=info msg="RemovePodSandbox \"cc847ba823ad45caeaef573a13148fd7e5d36a3d9513877601ccab4c04f06dcf\" returns successfully" Jul 14 21:53:56.695449 containerd[1545]: time="2025-07-14T21:53:56.695408282Z" level=info msg="StopPodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\"" Jul 14 21:53:56.696009 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 36996 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:53:56.697830 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:53:56.701920 systemd-logind[1524]: New session 14 of user core. Jul 14 21:53:56.709328 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.728 [WARNING][6346] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26306074-76d4-4748-a961-4f9fbf0ca63f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb", Pod:"coredns-7c65d6cfc9-nqcm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a41df8775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.728 [INFO][6346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.728 [INFO][6346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" iface="eth0" netns="" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.728 [INFO][6346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.728 [INFO][6346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.746 [INFO][6357] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.746 [INFO][6357] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.746 [INFO][6357] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.755 [WARNING][6357] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.755 [INFO][6357] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.757 [INFO][6357] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.761178 containerd[1545]: 2025-07-14 21:53:56.759 [INFO][6346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.762070 containerd[1545]: time="2025-07-14T21:53:56.761213485Z" level=info msg="TearDown network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" successfully" Jul 14 21:53:56.762070 containerd[1545]: time="2025-07-14T21:53:56.761236406Z" level=info msg="StopPodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" returns successfully" Jul 14 21:53:56.762070 containerd[1545]: time="2025-07-14T21:53:56.761664938Z" level=info msg="RemovePodSandbox for \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\"" Jul 14 21:53:56.762070 containerd[1545]: time="2025-07-14T21:53:56.761696379Z" level=info msg="Forcibly stopping sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\"" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.795 [WARNING][6382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"26306074-76d4-4748-a961-4f9fbf0ca63f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02e56152ee8072ff7a64f88e2e23abf13baec4e3917b92307b7178312fa72aeb", Pod:"coredns-7c65d6cfc9-nqcm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7a41df8775", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.795 [INFO][6382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.795 [INFO][6382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" iface="eth0" netns="" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.795 [INFO][6382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.795 [INFO][6382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.818 [INFO][6391] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.819 [INFO][6391] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.819 [INFO][6391] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.829 [WARNING][6391] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.829 [INFO][6391] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" HandleID="k8s-pod-network.87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Workload="localhost-k8s-coredns--7c65d6cfc9--nqcm7-eth0" Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.838 [INFO][6391] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:53:56.842552 containerd[1545]: 2025-07-14 21:53:56.840 [INFO][6382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89" Jul 14 21:53:56.843072 containerd[1545]: time="2025-07-14T21:53:56.842590263Z" level=info msg="TearDown network for sandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" successfully" Jul 14 21:53:56.848749 containerd[1545]: time="2025-07-14T21:53:56.848682561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:53:56.848830 containerd[1545]: time="2025-07-14T21:53:56.848805804Z" level=info msg="RemovePodSandbox \"87129d0ecc7f2836805add89d56aeee76d3f73b27c86cb9e08a4cd038c9c2e89\" returns successfully" Jul 14 21:53:56.849274 containerd[1545]: time="2025-07-14T21:53:56.849251737Z" level=info msg="StopPodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\"" Jul 14 21:53:56.920282 sshd[6332]: pam_unix(sshd:session): session closed for user core Jul 14 21:53:56.926219 systemd[1]: sshd@13-10.0.0.64:22-10.0.0.1:36996.service: Deactivated successfully. Jul 14 21:53:56.929250 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 21:53:56.933084 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. Jul 14 21:53:56.933932 systemd-logind[1524]: Removed session 14. Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.890 [WARNING][6409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24e0aa04-85b8-423b-8338-45073fa49cb5", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06", Pod:"coredns-7c65d6cfc9-rvt7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7973f2a521e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.891 [INFO][6409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.891 [INFO][6409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" iface="eth0" netns="" Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.891 [INFO][6409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.891 [INFO][6409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.914 [INFO][6419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.914 [INFO][6419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.914 [INFO][6419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.928 [WARNING][6419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0"
Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.928 [INFO][6419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0"
Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.930 [INFO][6419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:53:56.934409 containerd[1545]: 2025-07-14 21:53:56.932 [INFO][6409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8"
Jul 14 21:53:56.934757 containerd[1545]: time="2025-07-14T21:53:56.934474428Z" level=info msg="TearDown network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" successfully"
Jul 14 21:53:56.934757 containerd[1545]: time="2025-07-14T21:53:56.934501669Z" level=info msg="StopPodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" returns successfully"
Jul 14 21:53:56.934968 containerd[1545]: time="2025-07-14T21:53:56.934932001Z" level=info msg="RemovePodSandbox for \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\""
Jul 14 21:53:56.934996 containerd[1545]: time="2025-07-14T21:53:56.934969442Z" level=info msg="Forcibly stopping sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\""
Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.968 [WARNING][6439] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"24e0aa04-85b8-423b-8338-45073fa49cb5", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 53, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e98beffee32eaf36a36fd3eb8b220baf9fa22d47ee14a05b6daa76f44b2d2d06", Pod:"coredns-7c65d6cfc9-rvt7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7973f2a521e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.968 [INFO][6439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.968 [INFO][6439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" iface="eth0" netns="" Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.968 [INFO][6439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.968 [INFO][6439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.987 [INFO][6448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0" Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.987 [INFO][6448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.987 [INFO][6448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.996 [WARNING][6448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0"
Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.996 [INFO][6448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" HandleID="k8s-pod-network.0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8" Workload="localhost-k8s-coredns--7c65d6cfc9--rvt7n-eth0"
Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.997 [INFO][6448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:53:57.001956 containerd[1545]: 2025-07-14 21:53:56.999 [INFO][6439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8"
Jul 14 21:53:57.002375 containerd[1545]: time="2025-07-14T21:53:57.002039162Z" level=info msg="TearDown network for sandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" successfully"
Jul 14 21:53:57.004796 containerd[1545]: time="2025-07-14T21:53:57.004759440Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 21:53:57.004876 containerd[1545]: time="2025-07-14T21:53:57.004851403Z" level=info msg="RemovePodSandbox \"0c3f86962a08fc065df22e66c90f9a9fadab4f5ec6c3994eb4d0336f0baf06f8\" returns successfully"
Jul 14 21:54:01.936279 systemd[1]: Started sshd@14-10.0.0.64:22-10.0.0.1:37062.service - OpenSSH per-connection server daemon (10.0.0.1:37062).
Jul 14 21:54:01.965739 sshd[6481]: Accepted publickey for core from 10.0.0.1 port 37062 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:01.967076 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:01.971028 systemd-logind[1524]: New session 15 of user core.
Jul 14 21:54:01.986454 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 21:54:02.139658 sshd[6481]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:02.150282 systemd[1]: Started sshd@15-10.0.0.64:22-10.0.0.1:37064.service - OpenSSH per-connection server daemon (10.0.0.1:37064).
Jul 14 21:54:02.151187 systemd[1]: sshd@14-10.0.0.64:22-10.0.0.1:37062.service: Deactivated successfully.
Jul 14 21:54:02.153386 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 21:54:02.154961 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit.
Jul 14 21:54:02.156342 systemd-logind[1524]: Removed session 15.
Jul 14 21:54:02.180097 sshd[6494]: Accepted publickey for core from 10.0.0.1 port 37064 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:02.181433 sshd[6494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:02.186336 systemd-logind[1524]: New session 16 of user core.
Jul 14 21:54:02.192346 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 21:54:02.615601 kubelet[2635]: I0714 21:54:02.615271 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 21:54:04.747753 kubelet[2635]: E0714 21:54:04.747717 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:09.230045 sshd[6494]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:09.238260 systemd[1]: Started sshd@16-10.0.0.64:22-10.0.0.1:47786.service - OpenSSH per-connection server daemon (10.0.0.1:47786).
Jul 14 21:54:09.239060 systemd[1]: sshd@15-10.0.0.64:22-10.0.0.1:37064.service: Deactivated successfully.
Jul 14 21:54:09.241482 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 21:54:09.242374 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit.
Jul 14 21:54:09.243173 systemd-logind[1524]: Removed session 16.
Jul 14 21:54:09.274164 sshd[6517]: Accepted publickey for core from 10.0.0.1 port 47786 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:09.275371 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:09.279552 systemd-logind[1524]: New session 17 of user core.
Jul 14 21:54:09.289322 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 21:54:18.605027 kubelet[2635]: I0714 21:54:18.604933 2635 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 21:54:20.747715 kubelet[2635]: E0714 21:54:20.747662 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:22.747746 kubelet[2635]: E0714 21:54:22.747713 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:31.058438 sshd[6517]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:31.066345 systemd[1]: Started sshd@17-10.0.0.64:22-10.0.0.1:53118.service - OpenSSH per-connection server daemon (10.0.0.1:53118).
Jul 14 21:54:31.066746 systemd[1]: sshd@16-10.0.0.64:22-10.0.0.1:47786.service: Deactivated successfully.
Jul 14 21:54:31.071182 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit.
Jul 14 21:54:31.071903 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 21:54:31.082668 systemd-logind[1524]: Removed session 17.
Jul 14 21:54:31.149348 sshd[6638]: Accepted publickey for core from 10.0.0.1 port 53118 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:31.150931 sshd[6638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:31.154721 systemd-logind[1524]: New session 18 of user core.
Jul 14 21:54:31.161333 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 14 21:54:31.714115 sshd[6638]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:31.724313 systemd[1]: Started sshd@18-10.0.0.64:22-10.0.0.1:53126.service - OpenSSH per-connection server daemon (10.0.0.1:53126).
Jul 14 21:54:31.724733 systemd[1]: sshd@17-10.0.0.64:22-10.0.0.1:53118.service: Deactivated successfully.
Jul 14 21:54:31.728259 systemd[1]: session-18.scope: Deactivated successfully.
Jul 14 21:54:31.728879 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit.
Jul 14 21:54:31.731854 systemd-logind[1524]: Removed session 18.
Jul 14 21:54:31.765935 sshd[6656]: Accepted publickey for core from 10.0.0.1 port 53126 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:31.767219 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:31.771813 systemd-logind[1524]: New session 19 of user core.
Jul 14 21:54:31.781306 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 14 21:54:31.913490 sshd[6656]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:31.920079 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit.
Jul 14 21:54:31.920277 systemd[1]: sshd@18-10.0.0.64:22-10.0.0.1:53126.service: Deactivated successfully.
Jul 14 21:54:31.924669 systemd[1]: session-19.scope: Deactivated successfully.
Jul 14 21:54:31.927406 systemd-logind[1524]: Removed session 19.
Jul 14 21:54:32.748137 kubelet[2635]: E0714 21:54:32.748097 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:54:36.924254 systemd[1]: Started sshd@19-10.0.0.64:22-10.0.0.1:42806.service - OpenSSH per-connection server daemon (10.0.0.1:42806).
Jul 14 21:54:36.956645 sshd[6695]: Accepted publickey for core from 10.0.0.1 port 42806 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:36.959740 sshd[6695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:36.964555 systemd-logind[1524]: New session 20 of user core.
Jul 14 21:54:36.969328 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 21:54:37.098308 sshd[6695]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:37.101797 systemd[1]: sshd@19-10.0.0.64:22-10.0.0.1:42806.service: Deactivated successfully.
Jul 14 21:54:37.104328 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit.
Jul 14 21:54:37.104451 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 21:54:37.105688 systemd-logind[1524]: Removed session 20.
Jul 14 21:54:42.113276 systemd[1]: Started sshd@20-10.0.0.64:22-10.0.0.1:42812.service - OpenSSH per-connection server daemon (10.0.0.1:42812).
Jul 14 21:54:42.143087 sshd[6714]: Accepted publickey for core from 10.0.0.1 port 42812 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:42.144607 sshd[6714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:42.148527 systemd-logind[1524]: New session 21 of user core.
Jul 14 21:54:42.158279 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 21:54:42.272282 sshd[6714]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:42.279149 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit.
Jul 14 21:54:42.279589 systemd[1]: sshd@20-10.0.0.64:22-10.0.0.1:42812.service: Deactivated successfully.
Jul 14 21:54:42.281306 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 21:54:42.282949 systemd-logind[1524]: Removed session 21.
Jul 14 21:54:47.290485 systemd[1]: Started sshd@21-10.0.0.64:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652).
Jul 14 21:54:47.320110 sshd[6751]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:47.321426 sshd[6751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:47.325073 systemd-logind[1524]: New session 22 of user core.
Jul 14 21:54:47.335235 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 21:54:47.453229 sshd[6751]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:47.456520 systemd[1]: sshd@21-10.0.0.64:22-10.0.0.1:45652.service: Deactivated successfully.
Jul 14 21:54:47.458776 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit.
Jul 14 21:54:47.458859 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 21:54:47.461020 systemd-logind[1524]: Removed session 22.
Jul 14 21:54:52.468333 systemd[1]: Started sshd@22-10.0.0.64:22-10.0.0.1:49754.service - OpenSSH per-connection server daemon (10.0.0.1:49754).
Jul 14 21:54:52.503061 sshd[6766]: Accepted publickey for core from 10.0.0.1 port 49754 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:54:52.504232 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:54:52.508349 systemd-logind[1524]: New session 23 of user core.
Jul 14 21:54:52.518346 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 21:54:52.670931 sshd[6766]: pam_unix(sshd:session): session closed for user core
Jul 14 21:54:52.674954 systemd[1]: sshd@22-10.0.0.64:22-10.0.0.1:49754.service: Deactivated successfully.
Jul 14 21:54:52.678394 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 21:54:52.680176 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit.
Jul 14 21:54:52.681143 systemd-logind[1524]: Removed session 23.
Jul 14 21:54:52.747950 kubelet[2635]: E0714 21:54:52.747833 2635 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"